When I start Passenger, multiple processes end up sharing the same MySQL connection.
bundle exec passenger-status
Requests in queue: 0
* PID: 13830 Sessions: 0 Processed: 107 Uptime: 1h 24m 22s
CPU: 0% Memory : 446M Last used: 41s ago
* PID: 13909 Sessions: 0 Processed: 0 Uptime: 41s
CPU: 0% Memory : 22M Last used: 41s ago
ss -antp4 | grep ':3306 '
ESTAB 0 0 XXX.XXX.XXX.XXX:55488 XXX.XXX.XXX.XXX:3306 users:(("ruby",pid=13909,fd=14),("ruby",pid=13830,fd=14),("ruby",pid=4672,fd=14)) #<= 4672 is preloader process?
ESTAB 0 0 XXX.XXX.XXX.XXX:55550 XXX.XXX.XXX.XXX:3306 users:(("ruby",pid=13830,fd=24))
ESTAB 0 0 XXX.XXX.XXX.XXX:55552 XXX.XXX.XXX.XXX:3306 users:(("ruby",pid=13909,fd=24))
Is the connection on local port 55488, shared by several processes, expected?
I believe inconsistencies can occur when multiple processes use the same connection, but I can't find the problem in my application.
I am using Rails 4.x and Passenger 6.0.2.
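For illustration of the mechanism in question, here is a minimal Python sketch (my own, not Passenger code) of how a forked child inherits the parent's open socket, which would explain all three ruby PIDs sharing fd 14 on that first connection:

#!/usr/bin/env python3
# Sketch: a forked child inherits the parent's open socket fd, so both
# processes share one TCP connection -- the same thing that happens when
# a preloading app server opens a connection before forking its workers.
import os
import socket

sock = socket.create_connection(("example.com", 80))  # stand-in for the DB connection
print("parent fd:", sock.fileno())

pid = os.fork()
if pid == 0:
    # Child process: same fd number, same underlying connection
    print("child fd:", sock.fileno(), "-- shared with the parent")
    os._exit(0)
os.waitpid(pid, 0)

The usual remedy is to re-establish any connection opened during preload after the fork; Passenger's smart-spawning documentation describes a starting_worker_process hook for exactly that purpose.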
My goal is to pull the key items we track as KPIs for my servers. My plan is to run this daily via a cron job and then have it email me once a week, so the results can be put into an Excel sheet to produce the monthly KPIs. Here is what I have so far.
#!/bin/bash
server=server1
ports=({8400..8499})
for l in "${ports[@]}"
do
    echo "checking on '$l'"
    # Fetch the status page, keep the interesting lines, strip the HTML tags
    sp=$(curl -k --silent "https://$server:$l/server-status" | grep -E "Apache Server|Total accesses|CPU Usage|second|uptime" | sed 's/<[^>]*>//g')
    echo "$l: $sp" >> kpi.tmp
done
# Remove blank lines once, after the loop, rather than rewriting kpi.out on every iteration
grep -v '^$' kpi.tmp > kpi.out
The output looks like this:
8400:
8401: Apache Server Status for server1(via x.x.x.x)
Server uptime: 18 days 4 hours 49 minutes 37 seconds
Total accesses: 545 - Total Traffic: 15.2 MB
CPU Usage: u115.57 s48.17 cu0 cs0 - .0104% CPU load
.000347 requests/sec - 10 B/second - 28.6 kB/request
8402: Apache Server Status for server1(via x.x.x.x)
Server uptime: 20 days 2 hours 20 minutes 26 seconds
Total accesses: 33 - Total Traffic: 487 kB
CPU Usage: u118.64 s49.41 cu0 cs0 - .00968% CPU load
1.9e-5 requests/sec - 0 B/second - 14.8 kB/request
8403:
8404:
8405: Apache Server Status for server1(via x.x.x.x)
Server uptime: 20 days 2 hours 20 minutes 28 seconds
Total accesses: 35 - Total Traffic: 545 kB
CPU Usage: u133.04 s57.48 cu0 cs0 - .011% CPU load
2.02e-5 requests/sec - 0 B/second - 15.6 kB/request
I am having a hard time figuring out how to filter the output the way I would like. As you can see from my desired output below, ports that return no data should be left out of the file entirely, and some of the info should be cut out of the returned data.
I would like my output to look like this:
8401: server1(via x.x.x.x)
Server uptime: 18 days 4 hours 49 minutes 37 seconds
Total accesses: 545 - Total Traffic: 15.2 MB
CPU Usage: .0104% CPU load
.000347 requests/sec - 10 B/second - 28.6 kB/request
8402: server1(via x.x.x.x)
Server uptime: 20 days 2 hours 20 minutes 26 seconds
Total accesses: 33 - Total Traffic: 487 kB
CPU Usage: .00968% CPU load
1.9e-5 requests/sec - 0 B/second - 14.8 kB/request
8405: server1(via x.x.x.x)
Server uptime: 20 days 2 hours 20 minutes 28 seconds
Total accesses: 35 - Total Traffic: 545 kB
CPU Usage: .011% CPU load
2.02e-5 requests/sec - 0 B/second - 15.6 kB/request
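One way to get exactly that output is to do the filtering in a small script rather than in the shell pipeline. Here is a minimal Python 3 sketch (an alternative I'm assuming is acceptable; the host, port range, and trimming rules simply mirror the bash script and the desired output above):

#!/usr/bin/env python3
# Minimal sketch: fetch each port's server-status page, strip the HTML,
# keep only the KPI lines, and skip ports that return nothing.
import re
import ssl
import urllib.request

SERVER = "server1"                       # assumption: same host as the script above
KEEP = re.compile(r"Apache Server|Total accesses|CPU Usage|second|uptime")
TAGS = re.compile(r"<[^>]*>")

ctx = ssl.create_default_context()
ctx.check_hostname = False               # the equivalent of curl -k
ctx.verify_mode = ssl.CERT_NONE

with open("kpi.out", "w") as out:
    for port in range(8400, 8500):
        try:
            with urllib.request.urlopen(
                    f"https://{SERVER}:{port}/server-status",
                    context=ctx, timeout=5) as resp:
                body = resp.read().decode("utf-8", "replace")
        except OSError:
            continue                     # port not answering: leave it out
        lines = [TAGS.sub("", l).strip() for l in body.splitlines() if KEEP.search(l)]
        lines = list(dict.fromkeys(l for l in lines if l))   # drop blanks and duplicates
        if not lines:
            continue                     # no data: leave it out
        # Trim the verbose parts the desired output omits
        lines = [l.replace("Apache Server Status for ", "") for l in lines]
        lines = [re.sub(r"CPU Usage: .* - ", "CPU Usage: ", l) for l in lines]
        out.write(f"{port}: " + "\n".join(lines) + "\n")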
I believe all I'd need to do to resolve this is to configure SSM inside Image Builder to use my proxy via an environment variable, e.g. HTTP_PROXY=HOST:PORT.
For example, I can run this on another server where all traffic is directed through the proxy:
curl -I --socks5-hostname socks.local:1080 https://s3.amazonaws.com/aws-cli/awscli-bundle.zip -o awscli-bundle.zip
Here's what Image Builder is trying to do and failing (before any of the Image Builder components are run):
SSM execution '68711005-5dc4-41f6-8cdd-633728ca41da' failed with status = 'Failed' in state = 'BUILDING' and failure message = 'Step fails when it is verifying the command has completed. Command 76b55646-79bb-417c-8bb6-6ee01f9a76ff returns unexpected invocation result: {Status=[Failed], ResponseCode=[7], Output=[ ----------ERROR------- + sudo systemctl stop ecs + curl https://s3.amazonaws.com/aws-cli/awscli-bundle.zip -o /tmp/imagebuilder_service/awscli-bundle.zip % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0 0 0 0 0 0 0 0 0 ...'
These env vars are all that should be needed; the problem is that I see no way to add them (similar to how you can in CodeBuild):
http_proxy=http://hostname:port
https_proxy=https://hostname:port
no_proxy=169.254.169.254
SSM Agent does not read environment variables from the host; you need to provide them in the file below for your platform, then reload systemd and restart the agent:
On Ubuntu Server instances where SSM Agent is installed by using a snap: /etc/systemd/system/snap.amazon-ssm-agent.amazon-ssm-agent.service.d/override.conf
On Amazon Linux 2 instances: /etc/systemd/system/amazon-ssm-agent.service.d/override.conf
On other operating systems: /etc/systemd/system/amazon-ssm-agent.service.d/amazon-ssm-agent.override
[Service]
Environment="http_proxy=http://hostname:port"
Environment="https_proxy=https://hostname:port"
Environment="no_proxy=169.254.169.254"
Reference: https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-proxy-with-ssm-agent.html#ssm-agent-proxy-systemd
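If you need to automate that (for example as an early step in the image build), here is a minimal sketch in Python (paths follow the Amazon Linux 2 case above; hostname:port are placeholders and it must run as root):

#!/usr/bin/env python3
# Sketch: write the systemd drop-in for the SSM Agent and restart it.
# Amazon Linux 2 paths; hostname:port are placeholders. Run as root.
import os
import subprocess

DROPIN_DIR = "/etc/systemd/system/amazon-ssm-agent.service.d"
os.makedirs(DROPIN_DIR, exist_ok=True)
with open(os.path.join(DROPIN_DIR, "override.conf"), "w") as f:
    f.write('[Service]\n'
            'Environment="http_proxy=http://hostname:port"\n'
            'Environment="https_proxy=https://hostname:port"\n'
            'Environment="no_proxy=169.254.169.254"\n')

# A drop-in only takes effect after a daemon-reload and an agent restart
subprocess.run(["systemctl", "daemon-reload"], check=True)
subprocess.run(["systemctl", "restart", "amazon-ssm-agent"], check=True)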
I have a cluster with a CronJob that generally runs OK. I schedule the CronJob to run every 3 minutes, and I notice that sometimes the jobs don't run at all for 6-9 minutes (two or three intervals). This happens several times a day and I'm not sure why. How can I check what the problem might be? Is there a way to overcome this? We use k8s 1.14.7.
I also tried changing the interval to 10 minutes, and I still see the same pattern: several times a day (for 20-30 minutes, i.e. 2-3 intervals) the job does not run.
The job's execution time is just 30 seconds and it does not run in parallel (it runs like a singleton job).
The logs (for the jobs that did run) don't show anything.
This is the CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: fdc-job
  namespace: {{required "Mon namespace variable '(.Values.mon.namespace)' is required" .Values.mon.namespace}}
spec:
  suspend: false
  schedule: "*/3 * * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  startingDeadlineSeconds: 10
  jobTemplate:
    spec:
      backoffLimit: 1
      template:
        spec:
          serviceAccountName: mon-sa
          containers:
          - name: cluster-check
            image: {{required "A valid .Values.artifactory.host entry required!" .Values.artifactory.host}}/{{(.Values.mon.image.repository)}}:{{(.Values.mon.image.tag)}}
            args: ["fdc"]
          restartPolicy: Never
          activeDeadlineSeconds: 100
          imagePullSecrets:
          - name: docker-images-secret
Update:
Running kubectl get pods returns the following for the last 40 minutes:
cluster-1569490560-b78nw 0/1 Completed 0 39m
cluster-1569490740-8gcwl 0/1 Completed 0 36m
cluster-1569490920-t9hwj 0/1 Completed 0 33m
cluster-1569491280-qz5sp 0/1 Completed 0 27m
cluster-1569491460-r2dwv 0/1 Completed 0 24m
cluster-1569491640-qn7r8 0/1 Completed 0 21m
cluster-1569492180-vkxcs 0/1 Completed 0 12m
cluster-1569492360-ksn7s 0/1 Completed 0 9m41s
cluster-1569492540-qqwwc 0/1 Completed 0 6m40s
cluster-1569492720-v2dr2 0/1 Completed 0 3m40s
As you can see, the job runs every 3 minutes, but the entries that would now show ages of about 15, 18, and 30 minutes are missing because those runs never executed. Any idea?
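To make the gaps explicit, here is a small Python sketch (my addition, not from the cluster) that decodes the numeric suffix in each pod name, which here is the job's scheduled Unix time in seconds, and reports the empty schedule slots:

#!/usr/bin/env python3
# Sketch: decode the scheduled-time suffix from each pod name and
# report any */3 slots (180s apart) with no matching pod.
from datetime import datetime, timezone

names = """
cluster-1569490560-b78nw cluster-1569490740-8gcwl cluster-1569490920-t9hwj
cluster-1569491280-qz5sp cluster-1569491460-r2dwv cluster-1569491640-qn7r8
cluster-1569492180-vkxcs cluster-1569492360-ksn7s cluster-1569492540-qqwwc
cluster-1569492720-v2dr2
""".split()

seen = sorted(int(n.split("-")[1]) for n in names)
for t in range(seen[0], seen[-1] + 1, 180):   # 180s = the */3 schedule
    if t not in seen:
        print("missed:", datetime.fromtimestamp(t, tz=timezone.utc).isoformat())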
In addition, I've upgraded k8s to 1.15.4, which did not solve the problem.
The command kubectl get cronjobs returns:
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cluster */3 * * * * False 0 2m49s 10d
Any clue or direction will be very helpful...
I am experimenting with launching rsync from QProcess and although it runs, it behaves differently when run from QProcess compared to running the exact same command from the command line.
Here is the command and stdout when run from QProcess
/usr/bin/rsync -atv --stats --progress --port=873 --compress-level=9 --recursive --delete --exclude="/etc/*.conf" --exclude="A*" rsync://myhost.com/haast/tmp/mysync/* /tmp/mysync/
receiving incremental file list
created directory /tmp/mysync
A
0 100% 0.00kB/s 0:00:00 (xfer#1, to-check=6/7)
B
0 100% 0.00kB/s 0:00:00 (xfer#2, to-check=5/7)
test.conf
0 100% 0.00kB/s 0:00:00 (xfer#3, to-check=4/7)
subdir/
subdir/A2
0 100% 0.00kB/s 0:00:00 (xfer#4, to-check=2/7)
subdir/C
0 100% 0.00kB/s 0:00:00 (xfer#5, to-check=1/7)
subdir/D
0 100% 0.00kB/s 0:00:00 (xfer#6, to-check=0/7)
Number of files: 7
Number of files transferred: 6
Total file size: 0 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 105
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 174
Total bytes received: 367
sent 174 bytes received 367 bytes 360.67 bytes/sec
total size is 0 speedup is 0.00
Notice that although I excluded 'A*', it still copied them! Now running the exact same command from the command line:
/usr/bin/rsync -atv --stats --progress --port=873 --compress-level=9 --recursive --delete --exclude="/etc/*.conf" --exclude="A*" rsync://myhost.com/haast/tmp/mysync/* /tmp/mysync/
receiving incremental file list
created directory /tmp/mysync
B
0 100% 0.00kB/s 0:00:00 (xfer#1, to-check=4/5)
test.conf
0 100% 0.00kB/s 0:00:00 (xfer#2, to-check=3/5)
subdir/
subdir/C
0 100% 0.00kB/s 0:00:00 (xfer#3, to-check=1/5)
subdir/D
0 100% 0.00kB/s 0:00:00 (xfer#4, to-check=0/5)
Number of files: 5
Number of files transferred: 4
Total file size: 0 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 83
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 132
Total bytes received: 273
sent 132 bytes received 273 bytes 270.00 bytes/sec
total size is 0 speedup is 0.00
Notice that now the 'A*' exclude is respected! Can someone explain why they are performing differently?
I noticed that if I remove the quotes surrounding the excludes, the QProcess run performs correctly.
In your command-line execution, the bash interpreter performs quote removal before the command runs, so the quotes are not passed to rsync's argument list. QProcess passes each argument verbatim, so rsync receives the literal quotes as part of the exclude pattern.
The following script shows how this bash substitution works:
[tmp]$ cat printargs.sh
#!/bin/bash
echo $*
[tmp]$ ./printargs.sh --exclude="A*"
--exclude=A*
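The same effect is easy to reproduce outside Qt. Here is a small Python sketch (my addition, reusing the printargs.sh script above) contrasting list-style arguments, which is effectively what QProcess does, with a shell invocation:

#!/usr/bin/env python3
# Sketch: list-style args reach the program verbatim (like QProcess);
# shell=True lets the shell do quote removal first (like a terminal).
import subprocess

# Like QProcess: no shell, so the quotes stay inside the argument
subprocess.run(["./printargs.sh", '--exclude="A*"'])          # prints --exclude="A*"

# Like the command line: bash strips the quotes before exec
subprocess.run('./printargs.sh --exclude="A*"', shell=True)   # prints --exclude=A*

With QProcess, the fix is therefore to put each argument unquoted into the QStringList (e.g. --exclude=A*), since no shell ever processes them.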
I have a Django view which creates 500-5000 new database INSERTS in a loop. Problem is, it is really slow! I'm getting about 100 inserts per minute on Postgres 8.3. We used to use MySQL on lesser hardware (smaller EC2 instance) and never had these types of speed issues.
Details:
Postgres 8.3 on Ubuntu Server 9.04.
Server is a "large" Amazon EC2 with database on EBS (ext3) - 11GB/20GB.
Here is some of my postgresql.conf -- let me know if you need more
shared_buffers = 4000MB
effective_cache_size = 7128MB
My Python:
for k in kw:
    k = k.lower()
    p = ProfileKeyword(profile=self)
    logging.debug(k)
    p.keyword, created = Keyword.objects.get_or_create(keyword=k, defaults={'keyword': k})
    if not created and ProfileKeyword.objects.filter(profile=self, keyword=p.keyword).count():
        # checking created is just a small optimization to save some database hits on new keywords
        pass  # duplicate entry
    else:
        p.save()
Some output from top:
top - 16:56:22 up 21 days, 20:55, 4 users, load average: 0.99, 1.01, 0.94
Tasks: 68 total, 1 running, 67 sleeping, 0 stopped, 0 zombie
Cpu(s): 5.8%us, 0.2%sy, 0.0%ni, 90.5%id, 0.7%wa, 0.0%hi, 0.0%si, 2.8%st
Mem: 15736360k total, 12527788k used, 3208572k free, 332188k buffers
Swap: 0k total, 0k used, 0k free, 11322048k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14767 postgres 25 0 4164m 117m 114m S 22 0.8 2:52.00 postgres
1 root 20 0 4024 700 592 S 0 0.0 0:01.09 init
2 root RT 0 0 0 0 S 0 0.0 0:11.76 migration/0
3 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/0
4 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0
5 root 10 -5 0 0 0 S 0 0.0 0:00.08 events/0
6 root 11 -5 0 0 0 S 0 0.0 0:00.00 khelper
7 root 10 -5 0 0 0 S 0 0.0 0:00.00 kthread
9 root 10 -5 0 0 0 S 0 0.0 0:00.00 xenwatch
10 root 10 -5 0 0 0 S 0 0.0 0:00.00 xenbus
18 root RT -5 0 0 0 S 0 0.0 0:11.84 migration/1
19 root 34 19 0 0 0 S 0 0.0 0:00.01 ksoftirqd/1
Let me know if any other details would be helpful.
One common reason for slow bulk operations like this is each insert happening in its own transaction. If you can get all of them to happen in a single transaction, it could go much faster.
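In Django terms, that means wrapping the whole loop in one transaction. A minimal sketch of the idea (transaction.atomic exists in Django 1.6+; older releases used commit_on_success for the same purpose):

from django.db import transaction

# Sketch: one transaction for the whole batch instead of one per save()
with transaction.atomic():
    for k in kw:
        k = k.lower()
        keyword, created = Keyword.objects.get_or_create(keyword=k)
        if created or not ProfileKeyword.objects.filter(
                profile=self, keyword=keyword).exists():
            ProfileKeyword(profile=self, keyword=keyword).save()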
Firstly, ORM operations are always going to be slower than pure SQL. I once wrote an update to a large database in ORM code and set it running, but quit it after several hours when it had completed only a tiny fraction. After rewriting it in SQL the whole thing ran in less than a minute.
Secondly, bear in mind that your code here is doing up to four separate database operations for every row in your data set - the get in get_or_create, possibly also the create, the count on the filter, and finally the save. That's a lot of database access.
Bearing in mind that a maximum of 5000 objects is not huge, you should be able to read the whole dataset into memory at the start. Then you can do a single filter to get all the existing Keyword objects in one go, saving a huge number of queries in the Keyword get_or_create and also avoiding the need to instantiate duplicate ProfileKeywords in the first place.
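A sketch of that batched approach (my own illustration of the suggestion above, assuming the Keyword and ProfileKeyword models from the question; newer Django versions could also use bulk_create):

# Sketch: a few set-based queries instead of up to four queries per row
keywords = {k.lower() for k in kw}

# One query fetches every Keyword that already exists
existing = {obj.keyword: obj for obj in Keyword.objects.filter(keyword__in=keywords)}

# Create only the missing Keyword rows
for k in keywords - set(existing):
    existing[k] = Keyword.objects.create(keyword=k)

# One query finds which ProfileKeyword links are already present
linked = set(ProfileKeyword.objects.filter(
    profile=self, keyword__in=list(existing.values())).values_list('keyword', flat=True))

# Save only the links that are genuinely new
for obj in existing.values():
    if obj.pk not in linked:
        ProfileKeyword(profile=self, keyword=obj).save()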