Grep(exclude) lines that have regex matching next line - regex

I have a log file that I am trying to grep -v down to only the useful information. I cannot figure out how to exclude a date line when the next line is also a date.
What I have so far:
Fri Apr 7 01:11:01 PDT 2017
Upgrading certbot-auto 0.12.0 to 0.13.0...
Replacing certbot-auto...
Installation succeeded.
Sat Apr 8 01:11:01 PDT 2017
Sun Apr 9 01:11:01 PDT 2017
Mon Apr 10 01:11:01 PDT 2017
Tue Apr 11 01:11:01 PDT 2017
Wed Apr 12 01:11:01 PDT 2017
Thu Apr 13 01:11:01 PDT 2017
Fri Apr 14 01:11:01 PDT 2017
Sat Apr 15 01:11:01 PDT 2017
Sun Apr 16 01:11:01 PDT 2017
Mon Apr 17 01:11:01 PDT 2017
Tue Apr 18 01:11:01 PDT 2017
Wed Apr 19 01:11:01 PDT 2017
Thu Apr 20 01:11:01 PDT 2017
Fri Apr 21 01:11:01 PDT 2017
WARNING: unable to check for updates.
Sat Apr 22 01:11:01 PDT 2017
Sun Apr 23 01:11:01 PDT 2017
Mon Apr 24 01:11:01 PDT 2017
Tue Apr 25 01:11:01 PDT 2017
What I want:
Fri Apr 7 01:11:01 PDT 2017
Upgrading certbot-auto 0.12.0 to 0.13.0...
Replacing certbot-auto...
Installation succeeded.
Fri Apr 21 01:11:01 PDT 2017
WARNING: unable to check for updates.

Well, I came up with this one. See if it helps.
Regex: ([A-Za-z]{3}\s[A-Za-z]{3}\s*\d{1,2}\s(?:\d{2}:){2}\d{2}\s[A-Z]{3}\s\d{4}\n?){2,}

SOLVED!
Got it using:
grep -v '^.*\(PDT\|PST\)\s*[0-9]\{4\}' -B 1 | grep -v '^--$'
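This works because -B 1 re-prints one line of leading context before every line that grep -v keeps, so a date line survives only when a non-date line follows it. A minimal reproducible sketch, assuming GNU grep (which supports \s in basic regexes):

```shell
# Sample input: a run of date-only lines followed by one message line.
cat > /tmp/dates.log <<'EOF'
Sat Apr 8 01:11:01 PDT 2017
Sun Apr 9 01:11:01 PDT 2017
Fri Apr 21 01:11:01 PDT 2017
WARNING: unable to check for updates.
EOF
# grep -v selects the non-date lines; -B 1 then re-prints one line of
# leading context, i.e. only the date immediately above each message.
# The trailing grep strips the "--" separators -B inserts between groups.
grep -v 'P[DS]T\s*[0-9]\{4\}' -B 1 /tmp/dates.log | grep -v '^--$'
# Output:
#   Fri Apr 21 01:11:01 PDT 2017
#   WARNING: unable to check for updates.
```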
Here is the final command:
cat certbot.log |
grep -v '^-*$' |
grep -v '^Processing ' |
grep -v '(skipped)' |
grep -v 'No renewals were' |
grep -v 'not due for renewal yet' |
grep -v 'No hooks' |
grep -v 'DeprecationWarning' |
grep -v 'not yet due for' |
grep -v '^Saving debug' |
grep -v 'Installing Python packages' |
grep -v 'Creating virtual' |
grep -v '^.*\(PDT\|PST\)\s[0-9]\{4\}' -B 1 |
grep -v '^--$' |
sed '/\(PDT\|PST\)/i\\n' |
sed 's/.*\(PDT\|PST\).*/--- & --- /'
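As an alternative sketch, the date-before-date filtering can be done in one pass with awk instead of the grep -B 1 trick, assuming every date line ends in PDT or PST followed by a four-digit year:

```shell
# Buffer each date line; print it only when the next line is a message.
# A date still buffered at end of input (no message after it) is dropped.
awk '
  / P[DS]T [0-9][0-9][0-9][0-9]$/ { held = $0; next }
  { if (held != "") print held; held = ""; print }
' certbot.log
```

Unlike the grep version, this emits no "--" group separators, so no cleanup pass is needed.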
Here is the final result:
--- Fri Mar 3 01:11:01 PST 2017 ---
Upgrading certbot-auto 0.11.1 to 0.12.0...
Replacing certbot-auto...
Installation succeeded.
--- Thu Mar 23 01:11:01 PDT 2017 ---
WARNING: unable to check for updates.
--- Wed Mar 29 01:11:01 PDT 2017 ---
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for {mydomain}.com
Waiting for verification...
Cleaning up challenges
Generating key (2048 bits): /etc/letsencrypt/keys/0001_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0001_csr-certbot.pem
new certificate deployed without reload, fullchain is
/etc/letsencrypt/live/{mydomain}.com/fullchain.pem
Congratulations, all renewals succeeded. The following certs have been renewed:
/etc/letsencrypt/live/{mydomain}.com/fullchain.pem (success)
--- Thu Apr 6 01:11:01 PDT 2017 ---
WARNING: unable to check for updates.
--- Fri Apr 7 01:11:01 PDT 2017 ---
Upgrading certbot-auto 0.12.0 to 0.13.0...
Replacing certbot-auto...
Installation succeeded.
--- Fri Apr 21 01:11:01 PDT 2017 ---
WARNING: unable to check for updates.

If you have perl installed, try running this in the shell:
perl -0777 -ne 'while(m/^(.*?succeeded\..*?\d{4}.)|WARNING:.*?\./simg){print "$&";}' your_file
Output:
Fri Apr 7 01:11:01 PDT 2017
Upgrading certbot-auto 0.12.0 to 0.13.0...
Replacing certbot-auto...
Installation succeeded.
WARNING: unable to check for updates.

Related

Django1.11 + Python3.6 + Nginx 1.12 + uWSGI 2.0 deployment with error ImportError: No module named XXX...unable to load app 0 (mountpoint='')

My deployment environment is "Django 1.11.13 + Python 3.6.5 (with virtualenv) + uWSGI 2.0 + Nginx 1.12". Here is my project:
(ncms) [ncms@localhost ncms]$ pwd
/home/ncms/ncms
(ncms) [ncms@localhost ncms]$ ll
total 36
drwxrwxr-x 12 ncms ncms 157 May 17 13:48 apps
-rwxrwxr-x 1 ncms ncms 16384 May 14 18:49 celerybeat-schedule
drwxrwxr-x 2 ncms ncms 66 May 17 10:40 db_tools
drwxrwxr-x 4 ncms ncms 41 May 17 10:40 extra_apps
drwxrwxr-x 5 ncms ncms 233 May 17 10:40 libs
drwxrwxr-x 2 ncms ncms 152 May 17 10:40 logfiles
-rwxrwxr-x 1 ncms ncms 855 May 6 22:23 manage.py
drwxrwxr-x 3 ncms ncms 201 May 21 14:19 ncms
-rwxrwxr-x 1 ncms ncms 351 May 15 18:25 ncms.conf
-rwxrwxr-x 1 ncms ncms 2766 May 17 13:43 notes.md
-rwxrwxr-x 1 ncms ncms 518 May 14 15:52 requirements.txt
drwxrwxr-x 3 ncms ncms 23 May 18 16:09 static
drwxrwxr-x 10 ncms ncms 120 May 18 16:05 static_files
drwxrwxr-x 11 ncms ncms 4096 May 17 15:38 templates
my virtualenv path and name:
(ncms) [ncms@localhost ncms]$ pwd
/home/ncms/.virtualenvs/ncms
(ncms) [ncms@localhost ncms]$ ll
total 8
drwxrwxr-x 3 ncms ncms 4096 May 21 13:46 bin
drwxrwxr-x 2 ncms ncms 24 May 15 11:49 include
drwxrwxr-x 3 ncms ncms 23 May 15 11:49 lib
-rw-rw-r-- 1 ncms ncms 61 May 15 11:50 pip-selfcheck.json
Three important files you must know:
1. /etc/uwsgi/ncms.ini
[uwsgi]
# Django directory that contains manage.py
chdir = /home/ncms/ncms
module = ncms.wsgi:application
env = DJANGO_SETTINGS_MODULE=ncms.settings
# enable master process manager
master = true
# bind to UNIX socket
socket = /run/uwsgi/ncms.sock
# number of worker processes
processes = 4
# user identifier of uWSGI processes
uid = ncms
# group identifier of uWSGI processes
gid = ncms
#respawn processes after serving 5000 requests
max-requests = 5000
# clear environment on exit
vacuum = true
# the virtualenv you are using (full path)
home = /home/ncms/.virtualenvs/ncms
# set mode and own of created UNIX socket
chown-socket = ncms:nginx
chmod-socket = 660
# place timestamps into log
log-date = true
logto = /var/log/uwsgi.log
no-site = true
2. /etc/systemd/system/uwsgi.service
[Unit]
Description=ncms uWSGI service
[Service]
ExecStartPre=/usr/bin/bash -c 'mkdir -p /run/uwsgi; chown ncms:nginx /run/uwsgi'
ExecStart=/usr/bin/uwsgi --emperor /etc/uwsgi
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
[Install]
WantedBy=graphical.target
3. /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
location = /favicon.ico { access_log off; log_not_found off; }
location /static {
root /home/ncms/ncms;
}
location / {
include uwsgi_params;
uwsgi_pass unix:/run/uwsgi/ncms.sock;
}
}
}
After they were configured, I ran:
sudo nginx -t
sudo usermod -a -G ncms nginx
chmod 710 /home/ncms
sudo systemctl daemon-reload
sudo systemctl restart nginx
sudo systemctl restart uwsgi
Then I always got this error at the Operational MODE: preforking step when I looked at /var/log/uwsgi.log:
Mon May 21 16:38:35 2018 - SIGINT/SIGQUIT received...killing workers...
Mon May 21 16:38:35 2018 - received message 0 from emperor
Mon May 21 16:38:36 2018 - worker 1 buried after 1 seconds
Mon May 21 16:38:36 2018 - worker 2 buried after 1 seconds
Mon May 21 16:38:36 2018 - worker 3 buried after 1 seconds
Mon May 21 16:38:36 2018 - worker 4 buried after 1 seconds
Mon May 21 16:38:36 2018 - goodbye to uWSGI.
Mon May 21 16:38:36 2018 - VACUUM: unix socket /run/uwsgi/ncms.sock removed.
Mon May 21 16:38:38 2018 - *** Starting uWSGI 2.0.17 (64bit) on [Mon May 21 16:38:38 2018] ***
Mon May 21 16:38:38 2018 - compiled with version: 4.8.5 20150623 (Red Hat 4.8.5-16) on 26 April 2018 05:37:29
Mon May 21 16:38:38 2018 - os: Linux-3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018
Mon May 21 16:38:38 2018 - nodename: localhost.localdomain
Mon May 21 16:38:38 2018 - machine: x86_64
Mon May 21 16:38:38 2018 - clock source: unix
Mon May 21 16:38:38 2018 - pcre jit disabled
Mon May 21 16:38:38 2018 - detected number of CPU cores: 4
Mon May 21 16:38:38 2018 - current working directory: /etc/uwsgi
Mon May 21 16:38:38 2018 - detected binary path: /usr/bin/uwsgi
Mon May 21 16:38:38 2018 - chdir() to /home/ncms/ncms
Mon May 21 16:38:38 2018 - your processes number limit is 7164
Mon May 21 16:38:38 2018 - your memory page size is 4096 bytes
Mon May 21 16:38:38 2018 - detected max file descriptor number: 1024
Mon May 21 16:38:38 2018 - lock engine: pthread robust mutexes
Mon May 21 16:38:38 2018 - thunder lock: disabled (you can enable it with --thunder-lock)
Mon May 21 16:38:38 2018 - uwsgi socket 0 bound to UNIX address /run/uwsgi/ncms.sock fd 3
Mon May 21 16:38:38 2018 - setgid() to 2014
Mon May 21 16:38:38 2018 - setuid() to 2030
Mon May 21 16:38:38 2018 - Python version: 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
Mon May 21 16:38:38 2018 - Set PythonHome to /home/ncms/.virtualenvs/ncms
Mon May 21 16:38:38 2018 - *** Python threads support is disabled. You can enable it with --enable-threads ***
Mon May 21 16:38:38 2018 - Python main interpreter initialized at 0x1cea860
Mon May 21 16:38:38 2018 - your server socket listen backlog is limited to 100 connections
Mon May 21 16:38:38 2018 - your mercy for graceful operations on workers is 60 seconds
Mon May 21 16:38:38 2018 - mapped 364600 bytes (356 KB) for 4 cores
Mon May 21 16:38:38 2018 - *** Operational MODE: preforking ***
Traceback (most recent call last):
File "./ncms/__init__.py", line 1, in <module>
from __future__ import absolute_import, unicode_literals
ImportError: No module named __future__
Mon May 21 16:38:38 2018 - unable to load app 0 (mountpoint='') (callable not found or import error)
Mon May 21 16:38:38 2018 - *** no app loaded. going in full dynamic mode ***
Mon May 21 16:38:38 2018 - *** uWSGI is running in multiple interpreter mode ***
Mon May 21 16:38:38 2018 - spawned uWSGI master process (pid: 3456)
Mon May 21 16:38:38 2018 - spawned uWSGI worker 1 (pid: 3458, cores: 1)
Mon May 21 16:38:38 2018 - spawned uWSGI worker 2 (pid: 3459, cores: 1)
Mon May 21 16:38:38 2018 - spawned uWSGI worker 3 (pid: 3460, cores: 1)
Mon May 21 16:38:38 2018 - spawned uWSGI worker 4 (pid: 3462, cores: 1)
When I removed the line "from __future__ import absolute_import, unicode_literals" from my code, it raised the same kind of error, like:
Mon May 21 16:43:30 2018 - *** Operational MODE: preforking ***
Traceback (most recent call last):
File "./ncms/__init__.py", line 4, in <module>
from .celery import app as celery_app
File "./ncms/celery.py", line 6, in <module>
import os
ImportError: No module named os
Mon May 21 16:43:30 2018 - unable to load app 0 (mountpoint='') (callable not found or import error)
Mon May 21 16:43:30 2018 - *** no app loaded. going in full dynamic mode ***
It looks like it can't import anything...
When I access my website, /var/log/uwsgi.log shows:
Mon May 21 16:55:28 2018 - --- no python application found, check your startup logs for errors ---
[pid: 3812|app: -1|req: -1/3] 192.168.10.1 () {46 vars in 854 bytes} [Mon May 21 16:55:28 2018] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
Mon May 21 16:55:28 2018 - --- no python application found, check your startup logs for errors ---
[pid: 3812|app: -1|req: -1/4] 192.168.10.1 () {48 vars in 855 bytes} [Mon May 21 16:55:28 2018] GET /favicon.ico => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (1 switches on core 0)
I've googled a lot and tried many times, changing things here and there, but I just can't find the right answer.
Could anyone help me? Please!
You can first use the uwsgi command line to start the app as root:
uwsgi --socket 127.0.0.1:8080 --chdir /home/ncms/ncms/ --wsgi-file ncms/wsgi.py
then, if that works, debug the config file setup.
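One more thing worth ruling out, based on the startup log itself: uWSGI reports "Python version: 2.7.5" while the virtualenv was created with Python 3.6.5. A uwsgi binary that embeds a different Python major version than the virtualenv's cannot find the venv's standard library, which produces exactly these "No module named os" / "No module named __future__" errors; the "no-site = true" line is also worth removing while debugging, since it skips the site module that sets up sys.path. A hedged sketch of the relevant ini lines, assuming your uwsgi build ships a python3 plugin (plugin names vary by distribution):

```ini
; /etc/uwsgi/ncms.ini -- fragment, hypothetical fix: make the embedded
; interpreter match the Python 3.6 virtualenv
plugin = python3
home = /home/ncms/.virtualenvs/ncms
```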

Sphinx installation centos7

I just updated Sphinx to the latest version on a dedicated server running CentOS 7, but after hours of searching I can't find the problem.
The Sphinx index was created fine, but I can't start the search daemon. I get these messages all the time:
systemctl status searchd.service
searchd.service - SphinxSearch Search Engine
Loaded: loaded (/usr/lib/systemd/system/searchd.service; disabled; vendor preset: disabled)
Active: failed (Result: timeout) since Sat 2018-03-24 21:14:09 CET; 3min 4s ago
Process: 17865 ExecStartPre=/bin/chown sphinx.sphinx /var/run/sphinx (code=exited, status=0/SUCCESS)
Process: 17863 ExecStartPre=/bin/mkdir -p /var/run/sphinx (code=killed, signal=TERM)
Mar 24 21:14:09 systemd[1]: Starting SphinxSearch Search Engine...
Mar 24 21:14:09 systemd[1]: searchd.service start-pre operation timed out. Terminating.
Mar 24 21:14:09 systemd[1]: Failed to start SphinxSearch Search Engine.
Mar 24 21:14:09 systemd[1]: Unit searchd.service entered failed state.
Mar 24 21:14:09 systemd[1]: searchd.service failed.
I have really no idea where this problem comes from.
In your systemd service file (mine is in /usr/lib/systemd/system/searchd.service), comment out:
/bin/chown sphinx.sphinx /var/run/sphinx
/bin/mkdir -p /var/run/sphinx
(you can run these commands manually if that has not been done yet).
Then change from
Type=forking
to
Type=simple
Then do systemctl daemon-reload and you can start/stop/status the service:
[root@server ~]# cat /usr/lib/systemd/system/searchd.service
[Unit]
Description=SphinxSearch Search Engine
After=network.target remote-fs.target nss-lookup.target
After=syslog.target
[Service]
Type=simple
User=sphinx
Group=sphinx
# Run ExecStartPre with root-permissions
PermissionsStartOnly=true
#ExecStartPre=/bin/mkdir -p /var/run/sphinx
#ExecStartPre=/bin/chown sphinx.sphinx /var/run/sphinx
# Run ExecStart with User=sphinx / Group=sphinx
ExecStart=/usr/bin/searchd --config /etc/sphinx/sphinx.conf
ExecStop=/usr/bin/searchd --config /etc/sphinx/sphinx.conf --stopwait
KillMode=process
KillSignal=SIGTERM
SendSIGKILL=no
LimitNOFILE=infinity
TimeoutStartSec=infinity
PIDFile=/var/run/sphinx/searchd.pid
[Install]
WantedBy=multi-user.target
Alias=sphinx.service
Alias=sphinxsearch.service
[root@server ~]# systemctl start searchd
[root@server ~]# systemctl status searchd
● searchd.service - SphinxSearch Search Engine
Loaded: loaded (/usr/lib/systemd/system/searchd.service; disabled; vendor preset: disabled)
Active: active (running) since Sun 2018-03-25 10:41:24 EDT; 4s ago
Process: 111091 ExecStop=/usr/bin/searchd --config /etc/sphinx/sphinx.conf --stopwait (code=exited, status=1/FAILURE)
Main PID: 112030 (searchd)
CGroup: /system.slice/searchd.service
├─112029 /usr/bin/searchd --config /etc/sphinx/sphinx.conf
└─112030 /usr/bin/searchd --config /etc/sphinx/sphinx.conf
Mar 25 10:41:24 server.domain.com searchd[112026]: Sphinx 2.3.2-id64-beta (4409612)
Mar 25 10:41:24 server.domain.com searchd[112026]: Copyright (c) 2001-2016, Andrew Aksyonoff
Mar 25 10:41:24 server.domain.com searchd[112026]: Copyright (c) 2008-2016, Sphinx Technologies Inc (http://sphinxsearch.com)
Mar 25 10:41:24 server.domain.com searchd[112026]: Sphinx 2.3.2-id64-beta (4409612)
Mar 25 10:41:24 server.domain.com searchd[112026]: Copyright (c) 2001-2016, Andrew Aksyonoff
Mar 25 10:41:24 server.domain.com searchd[112026]: Copyright (c) 2008-2016, Sphinx Technologies Inc (http://sphinxsearch.com)
Mar 25 10:41:24 server.domain.com searchd[112026]: precaching index 'test1'
Mar 25 10:41:24 server.domain.com searchd[112026]: WARNING: index 'test1': prealloc: failed to open /var/lib/sphinx/test1.sph: No such file or directory...T SERVING
Mar 25 10:41:24 server.domain.com searchd[112026]: precaching index 'testrt'
Mar 25 10:41:24 server.domain.com systemd[1]: searchd.service: Supervising process 112030 which is not our child. We'll most likely not notice when it exits.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server ~]# systemctl stop searchd
[root@server ~]# systemctl status searchd
● searchd.service - SphinxSearch Search Engine
Loaded: loaded (/usr/lib/systemd/system/searchd.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sun 2018-03-25 10:41:36 EDT; 1s ago
Process: 112468 ExecStop=/usr/bin/searchd --config /etc/sphinx/sphinx.conf --stopwait (code=exited, status=1/FAILURE)
Main PID: 112030
Mar 25 10:41:24 server.domain.com searchd[112026]: WARNING: index 'test1': prealloc: failed to open /var/lib/sphinx/test1.sph: No such file or directory...T SERVING
Mar 25 10:41:24 server.domain.com searchd[112026]: precaching index 'testrt'
Mar 25 10:41:24 server.domain.com systemd[1]: searchd.service: Supervising process 112030 which is not our child. We'll most likely not notice when it exits.
Mar 25 10:41:33 server.domain.com systemd[1]: Stopping SphinxSearch Search Engine...
Mar 25 10:41:33 server.domain.com searchd[112468]: [Sun Mar 25 10:41:33.183 2018] [112468] using config file '/etc/sphinx/sphinx.conf'...
Mar 25 10:41:33 server.domain.com searchd[112468]: [Sun Mar 25 10:41:33.183 2018] [112468] stop: successfully sent SIGTERM to pid 112030
Mar 25 10:41:36 server.domain.com systemd[1]: searchd.service: control process exited, code=exited status=1
Mar 25 10:41:36 server.domain.com systemd[1]: Stopped SphinxSearch Search Engine.
Mar 25 10:41:36 server.domain.com systemd[1]: Unit searchd.service entered failed state.
Mar 25 10:41:36 server.domain.com systemd[1]: searchd.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
I had the same problem and finally found the solution that worked for me.
I have edited my "/etc/systemd/system/sphinx.service" to look like
[Unit]
Description=SphinxSearch Search Engine
After=network.target remote-fs.target nss-lookup.target
After=syslog.target
[Service]
User=sphinx
Group=sphinx
RuntimeDirectory=sphinxsearch
RuntimeDirectoryMode=0775
# Run ExecStart with User=sphinx / Group=sphinx
ExecStart=/usr/bin/searchd --config /etc/sphinx/sphinx.conf
ExecStop=/usr/bin/searchd --config /etc/sphinx/sphinx.conf --stopwait
KillMode=process
KillSignal=SIGTERM
SendSIGKILL=no
LimitNOFILE=infinity
TimeoutStartSec=infinity
#PIDFile=/var/run/sphinx/searchd.pid
PIDFile=/var/run/sphinxsearch/searchd.pid
[Install]
WantedBy=multi-user.target
Alias=sphinx.service
Alias=sphinxsearch.service
With this unit my searchd is able to survive a reboot. The solution from the previous post had a problem in my case: the /var/run/sphinxsearch directory is deleted on reboot, so searchd would start after a reboot before the directory existed; RuntimeDirectory makes systemd recreate it.
The fact is that RHEL (CentOS) 7 does not accept the value "infinity" for the "TimeoutStartSec" parameter. You must set a numeric value, for example TimeoutStartSec=600.

AWS API Gateway Cannot GET / when function sleeped for long time

My current stack is AWS API Gateway --> AWS Lambda --> swagger-node + swagger-express-mw + aws-serverless-express.
So my Swagger API is hosted as one node.js Lambda function and invoked with aws_proxy from API Gateway. This works quite well. The only thing is that when the function has been idle for too long (cold start?), I get a Cannot GET / as output from every URL I call first. From the 2nd request on, it runs very fast. Any ideas on that?
I don't think it comes from the API Gateway integration timeout, as that is 30 seconds. The slowest invocation time of the function directly via Lambda is around 2.5s, and when it is called more often it is normally not more than 150ms. I also increased the Lambda timeout for that function to 10s, so the error should not come from there either.
Logs from Test Request via API Gateway first Invocation
Response Body
Cannot GET /hello
Response Headers
{
"x-powered-by": "Express",
"x-content-type-options": "nosniff",
"content-type": "text/html; charset=utf-8",
"content-length": "18",
"date": "Sun, 19 Feb 2017 15:00:11 GMT",
"connection": "close",
"X-Amzn-Trace-Id": "<TRACE-ID>"
}
Logs
Execution log for request test-request
Sun Feb 19 15:00:07 UTC 2017 : Starting execution for request: test-invoke-request
Sun Feb 19 15:00:07 UTC 2017 : HTTP Method: GET, Resource Path: /hello
Sun Feb 19 15:00:07 UTC 2017 : Method request path: {}
Sun Feb 19 15:00:07 UTC 2017 : Method request query string: {}
Sun Feb 19 15:00:07 UTC 2017 : Method request headers: {}
Sun Feb 19 15:00:07 UTC 2017 : Method request body before transformations:
Sun Feb 19 15:00:07 UTC 2017 : Endpoint request URI: https://lambda.eu-central-1.amazonaws.com/2015-03-31/functions/arn:aws:lambda:eu-central-1:<ACCOUNT-ID>:function:api/invocations
Sun Feb 19 15:00:07 UTC 2017 : Endpoint request headers: {x-amzn-lambda-integration-tag=test-request, Authorization=**************************************************************************************************************************************************************************************************************************************************************************************************************************4b0637, X-Amz-Date=20170219T150007Z, x-amzn-apigateway-api-id=965h04axki, Accept=application/json, User-Agent=AmazonAPIGateway_965h04axki, X-Amz-Security-Token=<SECURITY-TOKEN>
Sun Feb 19 15:00:07 UTC 2017 : Endpoint request body after transformations: {"resource":"/hello","path":"/hello","httpMethod":"GET","headers":null,"queryStringParameters":null,"pathParameters":null,"stageVariables":null,"requestContext":{"accountId":"<ACCOUNT-ID>","resourceId":"ll6gw8","stage":"test-invoke-stage","requestId":"test-invoke-request","identity":{"cognitoIdentityPoolId":null,"accountId":"<ACCOUNT-ID>","cognitoIdentityId":null,"caller":"<ACCOUNT-ID>","apiKey":"test-invoke-api-key","sourceIp":"test-invoke-source-ip","accessKey":"<ACCESS-ID>","cognitoAuthenticationType":null,"cognitoAuthenticationProvider":null,"userArn":"arn:aws:iam::<ACCOUNT-ID>:root","userAgent":"Apache-HttpClient/4.5.x (Java/1.8.0_102)","user":"<ACCOUNT-ID>"},"resourcePath":"/hello","httpMethod":"GET","apiId":"965h04axki"},"body":null,"isBase64Encoded":false}
Sun Feb 19 15:00:11 UTC 2017 : Endpoint response body before transformations: {"statusCode":404,"body":"Cannot GET /hello\n","headers":{"x-powered-by":"Express","x-content-type-options":"nosniff","content-type":"text/html; charset=utf-8","content-length":"18","date":"Sun, 19 Feb 2017 15:00:11 GMT","connection":"close"},"isBase64Encoded":false}
Sun Feb 19 15:00:11 UTC 2017 : Endpoint response headers: {x-amzn-Remapped-Content-Length=0, x-amzn-RequestId=19f8554e-f6b4-11e6-8184-d3ccf0ccf643, Connection=keep-alive, Content-Length=267, Date=Sun, 19 Feb 2017 15:00:11 GMT, Content-Type=application/json}
Sun Feb 19 15:00:11 UTC 2017 : Method response body after transformations: Cannot GET /hello
Sun Feb 19 15:00:11 UTC 2017 : Method response headers: {x-powered-by=Express, x-content-type-options=nosniff, content-type=text/html; charset=utf-8, content-length=18, date=Sun, 19 Feb 2017 15:00:11 GMT, connection=close, X-Amzn-Trace-Id=Root=1-58a9b2f7-91fc7371e41d6ae9c2fbf64d}
Sun Feb 19 15:00:11 UTC 2017 : Successfully completed execution
Sun Feb 19 15:00:11 UTC 2017 : Method completed with status: 404
Logs from Test Request via API Gateway second Invocation
Response Body
"Hello, stranger!"
Response Headers
{
"x-powered-by": "Express",
"access-control-allow-origin": "*",
"content-type": "application/json; charset=utf-8",
"content-length": "18",
"etag": "W/\"12-E1p7iNXxJ4trMdmFBhlU9Q\"",
"date": "Mon, 13 Feb 2017 20:12:36 GMT",
"connection": "close",
"X-Amzn-Trace-Id": "<Trace-ID>"
}
Logs
Execution log for request test-request
Mon Feb 13 20:12:36 UTC 2017 : Starting execution for request: test-invoke-request
Mon Feb 13 20:12:36 UTC 2017 : HTTP Method: GET, Resource Path: /hello
Mon Feb 13 20:12:36 UTC 2017 : Method request path: {}
Mon Feb 13 20:12:36 UTC 2017 : Method request query string: {}
Mon Feb 13 20:12:36 UTC 2017 : Method request headers: {}
Mon Feb 13 20:12:36 UTC 2017 : Method request body before transformations:
Mon Feb 13 20:12:36 UTC 2017 : Endpoint request URI: https://lambda.eu-central-1.amazonaws.com/2015-03-31/functions/arn:aws:lambda:eu-central-1:<LAMBDA-FUNCTION-ID>:function:api/invocations
Mon Feb 13 20:12:36 UTC 2017 : Endpoint request headers: {x-amzn-lambda-integration-tag=test-request, Authorization=*******************************************************************************************************************************************************************************************************************************************************************************************************************************************3e1b18, X-Amz-Date=20170213T201236Z, x-amzn-apigateway-api-id=965h04axki, X-Amz-Source-Arn=arn:aws:execute-api:eu-central-1:<ACCOUNT-ID>:965h04axki/null/GET/hello, Accept=application/json, User-Agent=AmazonAPIGateway_965h04axki, X-Amz-Security-Token=<TOKEN>
Mon Feb 13 20:12:36 UTC 2017 : Endpoint request body after transformations: {"resource":"/hello","path":"/hello","httpMethod":"GET","headers":null,"queryStringParameters":null,"pathParameters":null,"stageVariables":null,"requestContext":{"accountId":"<ACCOUNT-ID>","resourceId":"ll6gw8","stage":"test-invoke-stage","requestId":"test-invoke-request","identity":{"cognitoIdentityPoolId":null,"accountId":"<ACCOUNT-ID>","cognitoIdentityId":null,"caller":"427402682812","apiKey":"test-invoke-api-key","sourceIp":"test-invoke-source-ip","accessKey":"<ACCESS-KEY>","cognitoAuthenticationType":null,"cognitoAuthenticationProvider":null,"userArn":"arn:aws:iam::<ACCOUNT-ID>:root","userAgent":"Apache-HttpClient/4.5.x (Java/1.8.0_102)","user":"<ACCOUNT-ID>"},"resourcePath":"/hello","httpMethod":"GET","apiId":"965h04axki"},"body":null,"isBase64Encoded":false}
Mon Feb 13 20:12:36 UTC 2017 : Endpoint response body before transformations: {"statusCode":200,"body":"\"Hello, stranger!\"","headers":{"x-powered-by":"Express","access-control-allow-origin":"*","content-type":"application/json; charset=utf-8","content-length":"18","etag":"W/\"12-E1p7iNXxJ4trMdmFBhlU9Q\"","date":"Mon, 13 Feb 2017 20:12:36 GMT","connection":"close"},"isBase64Encoded":false}
Mon Feb 13 20:12:36 UTC 2017 : Endpoint response headers: {x-amzn-Remapped-Content-Length=0, x-amzn-RequestId=c3354327-f228-11e6-8c1d-ed11cc413770, Connection=keep-alive, Content-Length=315, Date=Mon, 13 Feb 2017 20:12:36 GMT, Content-Type=application/json}
Mon Feb 13 20:12:36 UTC 2017 : Method response body after transformations: "Hello, stranger!"
Mon Feb 13 20:12:36 UTC 2017 : Method response headers: {x-powered-by=Express, access-control-allow-origin=*, content-type=application/json; charset=utf-8, content-length=18, etag=W/"12-E1p7iNXxJ4trMdmFBhlU9Q", date=Mon, 13 Feb 2017 20:12:36 GMT, connection=close, X-Amzn-Trace-Id=Root=1-58a21334-8ea6c4b5944eebb873bc7d2e}
Mon Feb 13 20:12:36 UTC 2017 : Successfully completed execution
Mon Feb 13 20:12:36 UTC 2017 : Method completed with status: 200
I think the response "Cannot GET /" is coming from your Lambda function itself. Can you check the API Gateway CloudWatch logs (or the Test Invoke feature in the console) to see what's different in the integration request and response on the first call?
I didn't see any real documentation about it (just this Medium post), but I also experienced that a Lambda can be frozen until its first invocation, or when it has not been called for a long time.
A solution is to schedule a regular invocation to wake up your Lambda, with Amazon CloudWatch Events.
I know this is an old question, but if you use TypeORM (or, more generally, if you wrap all your Express middlewares in a .then() callback of a Promise) and you set context.callbackWaitsForEmptyEventLoop = false in your Lambda handler, maybe this could help you: https://github.com/typeorm/typeorm/issues/5894
Long story short: avoid setting that flag to false if possible; otherwise, avoid wrapping the Express middlewares in the .then() callback and, for instance, initialize your db connection in the first Express middleware.

nginx / uwsgi with django 1.9.8 - Bad gateway 502 error with upstream prematurely closed connection while reading response header from upstream?

my nginx_app.conf -
server
{
listen 8000;
#server_name
access_log /var/log/tw-access.log;
error_log /var/log/tw-error.log;
root /var/www/djangoapp;
access_log on;
error_log on;
location /static/
{
alias /var/www/djangoapp/apptw/static/;
}
location /
{
uwsgi_pass 127.0.0.1:8800;
include uwsgi_params;
uwsgi_read_timeout 500;
}
}
and my uwsgi_app.ini -
# djangoapp_uwsgi.ini file
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /var/www/djangoapp
# Django's wsgi file
module = djangoapp.wsgi:application
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 5
# the socket (use the full path to be safe)
socket = 127.0.0.1:8800
# ... with appropriate permissions - may be needed
chmod-socket = 664
# clear environment on exit
vacuum = true
max-requests = 5000
uid = www-data
gid = www-data
enable-threads = true
buffer-size = 65535
When I open server-IP:8000 it shows a 502 Bad Gateway, and the nginx error file shows:
upstream prematurely closed connection while reading response header from upstream,
client: 223.181.31.8, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://127.0.0.1:8800", host: "server-IP:8000"
uwsgi logs:
Sun Sep 18 01:35:19 2016 - -- unavailable modifier requested: 0 --
Sun Sep 18 13:38:49 2016 - -- unavailable modifier requested: 0 --
Sun Sep 18 13:38:49 2016 - -- unavailable modifier requested: 0 --
Sun Sep 18 13:47:03 2016 - SIGINT/SIGQUIT received...killing workers...
Sun Sep 18 13:47:04 2016 - worker 1 buried after 1 seconds
Sun Sep 18 13:47:04 2016 - worker 2 buried after 1 seconds
Sun Sep 18 13:47:04 2016 - worker 3 buried after 1 seconds
Sun Sep 18 13:47:04 2016 - worker 4 buried after 1 seconds
Sun Sep 18 13:47:04 2016 - worker 5 buried after 1 seconds
Sun Sep 18 13:47:04 2016 - goodbye to uWSGI.
Sun Sep 18 13:47:04 2016 - VACUUM: unix socket /run/uwsgi/app/tw/socket removed.
Sun Sep 18 13:47:05 2016 - *** Starting uWSGI 2.0.12-debian (64bit) on [Sun Sep 18 13:47:04 2016] ***
Sun Sep 18 13:47:05 2016 - compiled with version: 5.3.1 20160412 on 13 April 2016 08:36:06
Sun Sep 18 13:47:05 2016 - os: Linux-2.6.32-042stab113.17 #1 SMP Wed Feb 10 18:31:00 MSK 2016
Sun Sep 18 13:47:05 2016 - nodename: cot-stg-1
Sun Sep 18 13:47:05 2016 - machine: x86_64
Sun Sep 18 13:47:05 2016 - clock source: unix
Sun Sep 18 13:47:05 2016 - pcre jit disabled
Sun Sep 18 13:47:05 2016 - detected number of CPU cores: 4
Sun Sep 18 13:47:05 2016 - current working directory: /
Sun Sep 18 13:47:05 2016 - writing pidfile to /run/uwsgi/app/tw/pid
Sun Sep 18 13:47:05 2016 - detected binary path: /usr/bin/uwsgi-core
Sun Sep 18 13:47:05 2016 - setgid() to 33
Sun Sep 18 13:47:05 2016 - setuid() to 33
Sun Sep 18 13:47:05 2016 - chdir() to /var/www/django_app
Sun Sep 18 13:47:05 2016 - your processes number limit is 514923
Sun Sep 18 13:47:05 2016 - your memory page size is 4096 bytes
Sun Sep 18 13:47:05 2016 - detected max file descriptor number: 1024
Sun Sep 18 13:47:05 2016 - lock engine: pthread robust mutexes
Sun Sep 18 13:47:05 2016 - thunder lock: disabled (you can enable it with --thunder-lock)
Sun Sep 18 13:47:05 2016 - uwsgi socket 0 bound to UNIX address /run/uwsgi/app/tw/socket fd 3
Sun Sep 18 13:47:05 2016 - uwsgi socket 1 bound to TCP address 127.0.0.1:8800 fd 5
Sun Sep 18 13:47:05 2016 - your server socket listen backlog is limited to 100 connections
Sun Sep 18 13:47:05 2016 - your mercy for graceful operations on workers is 60 seconds
Sun Sep 18 13:47:05 2016 - mapped 805242 bytes (786 KB) for 5 cores
Sun Sep 18 13:47:05 2016 - *** Operational MODE: preforking ***
Sun Sep 18 13:47:05 2016 - *** no app loaded. going in full dynamic mode ***
Sun Sep 18 13:47:05 2016 - *** uWSGI is running in multiple interpreter mode ***
Sun Sep 18 13:47:05 2016 - !!!!!!!!!!!!!! WARNING !!!!!!!!!!!!!!
Sun Sep 18 13:47:05 2016 - no request plugin is loaded, you will not be able to manage requests.
Sun Sep 18 13:47:05 2016 - you may need to install the package for your language of choice, or simply load it with --plugin.
Sun Sep 18 13:47:05 2016 - !!!!!!!!!!! END OF WARNING !!!!!!!!!!
Sun Sep 18 13:47:05 2016 - spawned uWSGI master process (pid: 8733)
Sun Sep 18 13:47:05 2016 - spawned uWSGI worker 1 (pid: 8735, cores: 1)
Sun Sep 18 13:47:05 2016 - spawned uWSGI worker 2 (pid: 8736, cores: 1)
Sun Sep 18 13:47:05 2016 - spawned uWSGI worker 3 (pid: 8737, cores: 1)
Sun Sep 18 13:47:05 2016 - spawned uWSGI worker 4 (pid: 8738, cores: 1)
Sun Sep 18 13:47:05 2016 - spawned uWSGI worker 5 (pid: 8739, cores: 1)
Sun Sep 18 13:47:27 2016 - -- unavailable modifier requested: 0 --
Sun Sep 18 13:55:41 2016 - -- unavailable modifier requested: 0 --
Sun Sep 18 13:55:43 2016 - -- unavailable modifier requested: 0 --
Sun Sep 18 13:57:03 2016 - -- unavailable modifier requested: 0 --
What should I do? please suggest.
Thank you in advance...
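For what it's worth, the log above already hints at the cause: "unavailable modifier requested: 0" together with the "no request plugin is loaded" warning means this uwsgi build cannot serve WSGI (modifier 0) requests. On Debian/Ubuntu's plugin-based uwsgi packaging, loading the Python plugin is the usual fix; a sketch (the package and plugin names are assumptions and vary by distribution):

```ini
# djangoapp_uwsgi.ini -- add to the [uwsgi] section
# (requires e.g. the uwsgi-plugin-python package on Debian/Ubuntu)
plugin = python
```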

WSO2 third party dependencies (e.g. axiom\1.2.11-wso2v4) where to find the wso2 change log?

It seems that WSO2 wraps some of their third-party dependencies so they can maintain their own version of each dependency with the WSO2-specific changes.
For example in %CARBON_HOME% for Carbon 4.1.0, you can find a modified version of axiom: %CARBON_HOME%/dependencies/axiom/1.2.11-wso2v4
Question: Where can the changelog be found for the four WSO2 changes that have been made to the axiom code base?
EDIT
I tried svn log, but it gave no useful information:
/cygdrive/c/Dev/wso2carbon_4.1.0/dependencies/axiom/1.2.11-wso2v4>
$ svn log
------------------------------------------------------------------------
r168614 | supunm@wso2.com | 2013-03-20 13:40:12 +0000 (Wed, 20 Mar 2013) | 1 line
committing kernel 4.1.0 tag
------------------------------------------------------------------------
AFAIK, there are no changelog files :( But if you take an svn checkout, the changes can be identified from the SVN logs, like:
C:\Projects\kernel\trunk\dependencies\axiom>svn log
------------------------------------------------------------------------
r170207 | kishanthan@wso2.com | 2013-04-11 23:55:55 +0530 (Thu, 11 Apr 2013) | 1 line
reverting a faulty commit, as per 167621, to fix test failures in axiom test suite
------------------------------------------------------------------------
r167220 | kishanthan@wso2.com | 2013-03-08 19:56:47 +0530 (Fri, 08 Mar 2013) | 1 line
upgrading HTTPCore 4.2.3 - CARBON-14072, patch from Shafreen
------------------------------------------------------------------------
r161637 | supunm | 2013-02-10 18:34:58 +0530 (Sun, 10 Feb 2013) | 1 line
build fix
------------------------------------------------------------------------
r161630 | supunm | 2013-02-10 17:53:08 +0530 (Sun, 10 Feb 2013) | 1 line
version update
------------------------------------------------------------------------
r161629 | supunm | 2013-02-10 17:48:49 +0530 (Sun, 10 Feb 2013) | 2 lines
moving axiom v3 to v4, v3 is already released!