Remove the following IPs in a log file using Unix regex

I have the access.log file with more than 1000 X-Forwarded-For log entries like the following
142.245.59.16, 67.69.175.224, 69.31.97.126 - - [22/Sep/2015:20:00:02 -0400] "GET /company-information/cs/null?path=%
157.55.39.76, 184.27.179.176, 165.254.1.175 - - [22/Sep/2015:20:00:05 -0400] "GET /metricstream/--ID__100325--/free-co-profile.xhtml
10.70.33.32 - - [22/Sep/2015:20:00:22 -0400] "GET /autodiscover/autodiscover.xml
172.30.152.90, 198.178.234.30, 184.27.120.46, 69.31.97.126 - - [22/Sep/2015:20:03:37 -0400] "GET /company-information/cs/null?path
From these log entries, I have to extract lines into an access_log.txt file with output like the following:
142.245.59.16 - - [22/Sep/2015:20:00:02 -0400] "GET /company-information/cs/null?path=%
157.55.39.76 - - [22/Sep/2015:20:00:05 -0400] "GET /metricstream/--ID__100325--/free-co-profile.xhtml
10.70.33.32 - - [22/Sep/2015:20:00:22 -0400] "GET /autodiscover/autodiscover.xml
172.30.152.90 - - [22/Sep/2015:20:03:37 -0400] "GET /company-information/cs/null?path
That is, keep the first IP as it is and remove the following two or more IPs. I have also tried the regex /\, .*?\ -/g, but I don't know how to apply it in a Unix sed command. Please help me solve this using a Unix command.

You can use this sed command, which deletes everything from the first ", " up to and including the "- -" field separator and then puts the separator back:
sed 's/, [^-]*- -/ - -/' file.log
This gives:
142.245.59.16 - - [22/Sep/2015:20:00:02 -0400] "GET /company-information/cs/null?path=%
157.55.39.76 - - [22/Sep/2015:20:00:05 -0400] "GET /metricstream/--ID__100325--/free-co-profile.xhtml
10.70.33.32 - - [22/Sep/2015:20:00:22 -0400] "GET /autodiscover/autodiscover.xml
172.30.152.90 - - [22/Sep/2015:20:03:37 -0400] "GET /company-information/cs/null?path

Or, anchoring on the " - -" separator:
sed 's/, .* - -/ - -/' ./access.log
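If you end up doing this kind of cleanup in a program rather than a shell pipeline, the same substitution is easy to express with a regular expression. Here is a minimal Go sketch; the file names access.log and access_log.txt come from the question, while the regex is my own slightly stricter variant that only strips tokens that look like IPv4 addresses:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

// Drop every ", <ip>" between the first client IP and the " - -" separator,
// mirroring the sed substitution shown above.
var extraIPs = regexp.MustCompile(`(, [0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)+ - -`)

func main() {
	in, err := os.Open("access.log")
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	out, err := os.Create("access_log.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	w := bufio.NewWriter(out)
	defer w.Flush()

	scanner := bufio.NewScanner(in)
	for scanner.Scan() {
		// Lines with a single IP have no match and are written unchanged.
		fmt.Fprintln(w, extraIPs.ReplaceAllString(scanner.Text(), " - -"))
	}
}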

Flask Lightsail logs receiving requests every 5 seconds

I've deployed a Flask application to Lightsail via a tutorial provided on the AWS website.
Everything is working as expected in terms of my frontend communicating with my backend, but as I try to debug and access the container logs via the Lightsail console, I notice that I'm currently receiving requests every 5 seconds. The logs look as follows:
[4/May/2022:06:48:30] 172.26.7.217 - - [04/May/2022 06:48:30] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:30] 172.26.17.192 - - [04/May/2022 06:48:30] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:35] 172.26.47.225 - - [04/May/2022 06:48:35] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:35] 172.26.57.133 - - [04/May/2022 06:48:35] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:35] 172.26.7.217 - - [04/May/2022 06:48:35] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:35] 172.26.17.192 - - [04/May/2022 06:48:35] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:40] 172.26.47.225 - - [04/May/2022 06:48:40] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:40] 172.26.57.133 - - [04/May/2022 06:48:40] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:40] 172.26.7.217 - - [04/May/2022 06:48:40] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:40] 172.26.17.192 - - [04/May/2022 06:48:40] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:45] 172.26.47.225 - - [04/May/2022 06:48:45] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:45] 172.26.57.133 - - [04/May/2022 06:48:45] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:45] 172.26.7.217 - - [04/May/2022 06:48:45] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:45] 172.26.17.192 - - [04/May/2022 06:48:45] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:50] 172.26.47.225 - - [04/May/2022 06:48:50] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:50] 172.26.57.133 - - [04/May/2022 06:48:50] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:50] 172.26.7.217 - - [04/May/2022 06:48:50] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:50] 172.26.17.192 - - [04/May/2022 06:48:50] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:55] 172.26.47.225 - - [04/May/2022 06:48:55] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:55] 172.26.57.133 - - [04/May/2022 06:48:55] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:55] 172.26.7.217 - - [04/May/2022 06:48:55] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:48:55] 172.26.17.192 - - [04/May/2022 06:48:55] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:49:00] 172.26.47.225 - - [04/May/2022 06:49:00] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:49:00] 172.26.57.133 - - [04/May/2022 06:49:00] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:49:00] 172.26.7.217 - - [04/May/2022 06:49:00] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:49:00] 172.26.17.192 - - [04/May/2022 06:49:00] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:49:05] 172.26.47.225 - - [04/May/2022 06:49:05] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:49:05] 172.26.57.133 - - [04/May/2022 06:49:05] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:49:05] 172.26.7.217 - - [04/May/2022 06:49:05] "[33mGET / HTTP/1.1[0m" 404 -
[4/May/2022:06:49:05] 172.26.17.192 - - [04/May/2022 06:49:05] "[33mGET / HTTP/1.1[0m" 404 -
There are a few things here that confuse me:
I don't have the / route specifically defined in my Flask application - is this necessary? It's clear the 404s are coming back because the route is not defined, but I don't have any code in my React frontend that explicitly makes a request to this route. I'm not sure if I'm supposed to just create a / route in my Flask application that more or less does nothing.
I see that the requests are coming in every 5 seconds - could this be some sort of health check? I'm certainly not visiting my frontend every 5 seconds. I do have Nginx set up on a Lightsail instance that's running my frontend, and I'm not sure if that might have something to do with it.
Any help is appreciated, thank you!

Regex catch bad octet in IP

Hi, can someone explain to me why the last octet of an IP, when it is 01 or 001, is not caught by this regex?
(\.?)([2-9][5-9][6-9]|[3-9][0-9][0-9]|0[0-9][0-9]?)($|\.)
As an example, the code (Go, assuming the regexp package is imported):
func findBadOctet(input string) string {
	badOctedIPv4 := "(\\.?)([2-9][5-9][6-9]|[3-9][0-9][0-9]|0[0-9][0-9]?)($|\\.)"
	ipv4Format := badOctedIPv4
	matchMe := regexp.MustCompile(ipv4Format)
	return matchMe.FindString(input)
}
the input data looks like:
10.185.248.71 - - [09/Jan/2015:19:12:06 +0000] 808840 "GET /inventoryService/inventory/purchaseItem?userId=20253471&itemId=23434300 HTTP/1.1" 500 17 "-" "Apache-HttpClient/4.2.6 (java 1.5)"
[Thu Mar 13 19:04:13 2014] [error] [client 50.0.134.125] File does not exist: /var/www/favicon.ico
192.168.000.254 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 10 bad
092.168.000.254 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 9 bad
123.234.345.001 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 8 bad
123.234.145.001 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 7 bad
345.234.123.1 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 6 bad
092.168.72.177 - - [22/Dec/2002:23:32:14 -0400] "GET /favicon.ico HTTP/1.1" 404 1997 www.yahoo.com "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; rv:1.7.3)..." "-" 5 bad
123.234.145.001 - - 4 bad
123.234.145.01 - - 3 bad
123.234.05.100 - - 2 bad
123.234.005.100 - - 1 bad
123.234.5.100 - - Last entry
The code above finds all the bad IP octets except a trailing 001 or 01.
Output of the program:
❯ go run ./findInvalidIPv4.go logfile.log
[192.168.000.254] : [.000.] : 192.168.000.254 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 10 bad
[092.168.000.254] : [ 092.] : 092.168.000.254 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 9 bad
[123.234.345.001] : [.345.] : 123.234.345.001 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 8 bad
[ 345.234.123.1] : [ 345.] : 345.234.123.1 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 6 bad
[ 092.168.72.177] : [ 092.] : 092.168.72.177 - - [22/Dec/2002:23:32:14 -0400] "GET /favicon.ico HTTP/1.1" 404 1997 www.yahoo.com "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; rv:1.7.3)..." "-" 5 bad
[ 123.234.05.100] : [ .05.] : 123.234.05.100 - - 2 bad
[123.234.005.100] : [.005.] : 123.234.005.100 - - 1 bad
Output explained:
the first column [...] is the full bad IP in which a bad octet was found
the second column [...] is the bad octet (the first match is enough)
the third column is the full line passed to the function above
Can someone point out what I am missing and why the 001 at the end does not match the pattern?
Thanks
Your group 3 at the end:
($|\.)
insists on either a dot or the end of the line appearing after the octet. That's fine for the first three octets, which are guaranteed to be followed by a ., but it won't work for the last one, which in your data is followed by a space.
The simple fix is to just remove it or make it optional:
(\.?)([2-9][5-9][6-9]|[3-9][0-9][0-9]|0[0-9][0-9]?)($|\.?)
Add a whitespace for group 3:
(\.?)([2-9][5-9][6-9]|[3-9][0-9][0-9]|0[0-9][0-9]?)(\s|$|\.)
Or just remove it:
(\.?)([2-9][5-9][6-9]|[3-9][0-9][0-9]|0[0-9][0-9]?)
All of these have issues. So maybe what you really want is to match any of your three-digit sequences with either a leading dot or a trailing dot:
\.[2-9][5-9][6-9]|\.[3-9][0-9][0-9]|\.0[0-9][0-9]|[2-9][5-9][6-9]\.|[3-9][0-9][0-9]\.|0[0-9][0-9]\.
We start to get into regular expressions being "Write once read never again" territory.
@selbie thanks again for your help. It seems that with all the suggestions here I am getting closer to solving this. This regex:
(\.|^)([2-9][5-9][6-9]|[3-9][0-9][0-9]|0[0-9]+)
seems to catch almost everything I need:
[ 192.168.2.001] : [ .001] : 192.168.2.001 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395
[192.168.000.254] : [ .000] : 192.168.000.254 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 10 bad
[092.168.000.254] : [ 092] : 092.168.000.254 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 9 bad
[123.234.345.001] : [ .345] : 123.234.345.001 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 8 bad
[123.234.145.001] : [ .001] : 123.234.145.001 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 7 bad
[ 345.234.123.1] : [ 345] : 345.234.123.1 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 6 bad
[ 300.234.123.1] : [ 300] : 300.234.123.1 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 6 bad
[300.300.300.300] : [ 300] : 300.300.300.300 - - [13/Sep/2006:07:01:51 -0700] "PROPFIND /svn/[xxxx]/[xxxx]/trunk HTTP/1.1" 401 587 6 bad
[ 092.168.72.177] : [ 092] : 092.168.72.177 - - [22/Dec/2002:23:32:14 -0400] "GET /favicon.ico HTTP/1.1" 404 1997 www.yahoo.com "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; rv:1.7.3)..." "-" 5 bad
[123.234.145.001] : [ .001] : 123.234.145.001 - - 4 bad
[ 123.234.145.01] : [ .01] : 123.234.145.01 - - 3 bad
[ 123.234.05.100] : [ .05] : 123.234.05.100 - - 2 bad
[123.234.005.100] : [ .005] : 123.234.005.100 - - 1 bad
and it is skipping good IPs like 200.200.200.200 or 100.100.100.100.
So we are getting closer to a working pattern. The only case that is still wrong is when a line starts with a time string such as 02:49:12, where the leading 02 (and so on) gets matched, for example:
[ 127.0.0.1] : [ 02] : 02:49:12 127.0.0.1 GET / 200
[ 127.0.0.1] : [ 02] : 02:49:35 127.0.0.1 GET /index.html 200
[ 127.0.0.1] : [ 03] : 03:01:06 127.0.0.1 GET /images/sponsered.gif 304
[ 127.0.0.1] : [ 03] : 03:52:36 127.0.0.1 GET /search.php 200
[ 127.0.0.1] : [ 04] : 04:17:03 127.0.0.1 GET /admin/style.css 200
[ 127.0.0.1] : [ 05] : 05:04:54 127.0.0.1 GET /favicon.ico 404
[ 127.0.0.1] : [ 05] : 05:38:07 127.0.0.1 GET /js/ads.js 200
So I am still looking for an answer as to what I am missing in that regular expression.
================================
edit
OK, this seems to do the job and is able to find the bad IP octets:
(\.|^)([2-9][5-9][6-9]|[3-9][0-9][0-9]|0[0-9]+)([^:/-])
I added the last, third group ([^:/-]) to exclude any two-digit time values.
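For anyone who wants to try the final pattern quickly, here is a minimal, self-contained Go sketch (the sample lines are taken from the log excerpts above, the rest is my own) that compiles it and prints the first match per line; lines with no match are treated as good:

package main

import (
	"fmt"
	"regexp"
)

// Final pattern from the edit above: a bad octet is either preceded by a dot
// or sits at the start of the line, and must not be followed by ':', '/' or
// '-', which filters out two-digit time components such as the 02 in 02:49:12.
var badOctet = regexp.MustCompile(`(\.|^)([2-9][5-9][6-9]|[3-9][0-9][0-9]|0[0-9]+)([^:/-])`)

func main() {
	lines := []string{
		"192.168.000.254 - - [13/Sep/2006:07:01:51 -0700] 10 bad",
		"123.234.145.001 - - 4 bad",
		"02:49:12 127.0.0.1 GET / 200",
		"200.200.200.200 - - good",
	}
	for _, line := range lines {
		if m := badOctet.FindString(line); m != "" {
			fmt.Printf("bad octet match %q in: %s\n", m, line)
		} else {
			fmt.Printf("no match (looks good) : %s\n", line)
		}
	}
}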

read event trigger on image downloading from s3

I use Amazon services. I have a task to track the IP address and user agent of everyone who downloads an image from S3.
I use Amazon API Gateway, AWS Lambda and Amazon S3. Is it possible? I found triggers only for uploading or deleting a file in S3.
As of now, S3 doesn't have an object-read event trigger. What you can do is use CloudTrail to track the API calls that read objects in the S3 bucket, and create a CloudWatch Events rule to trigger a Lambda function.
ex: S3 -> CloudTrail -> CloudWatch Event -> Rule -> Lambda
Another simple solution would be to serve the object download directly via Lambda.
ex: API Gateway -> Lambda -> S3
This will return the Lambda output, which can be the blob itself (be aware of the size limit) or, preferably, a pre-signed URL for the object.
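For the API Gateway -> Lambda -> S3 variant, a minimal sketch of such a function (using the Go Lambda runtime and the AWS SDK for Go v1; the bucket name and the "key" path parameter are placeholders, not anything from the question) that logs the caller's IP and user agent and then redirects to a short-lived pre-signed URL:

package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	// The tracking part: API Gateway passes the client IP and user agent
	// along in the request context.
	log.Printf("image download: ip=%s user-agent=%s key=%s",
		req.RequestContext.Identity.SourceIP,
		req.RequestContext.Identity.UserAgent,
		req.PathParameters["key"])

	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	// Build a GetObject request and pre-sign it instead of returning the blob.
	getReq, _ := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String("my-image-bucket"),         // placeholder bucket
		Key:    aws.String(req.PathParameters["key"]), // placeholder route parameter
	})
	url, err := getReq.Presign(15 * time.Minute)
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: 500}, err
	}

	// Redirect the client to the pre-signed URL so S3 serves the bytes.
	return events.APIGatewayProxyResponse{
		StatusCode: 302,
		Headers:    map[string]string{"Location": url},
	}, nil
}

func main() {
	lambda.Start(handler)
}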
You mention that your goal is "I want to track an IP address and user agent for each request of an image from s3".
To obtain this information, you should activate Amazon S3 server access logging:
Server access logging provides detailed records for the requests that are made to a bucket.
The Amazon S3 Server Access Log Format includes IP address, User Agent and other standard web log information:
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be awsexamplebucket1 [06/Feb/2019:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 3E57427F3EXAMPLE REST.GET.VERSIONING - "GET /awsexamplebucket1?versioning HTTP/1.1" 200 - 113 - 7 - "-" "S3Console/0.4" - s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234= SigV2 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader awsexamplebucket1.s3.us-west-1.amazonaws.com TLSV1.1
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be awsexamplebucket1 [06/Feb/2019:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 891CE47D2EXAMPLE REST.GET.LOGGING_STATUS - "GET /awsexamplebucket1?logging HTTP/1.1" 200 - 242 - 11 - "-" "S3Console/0.4" - 9vKBE6vMhrNiWHZmb2L0mXOcqPGzQOI5XLnCtZNPxev+Hf+7tpT6sxDwDty4LHBUOZJG96N1234= SigV2 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader awsexamplebucket1.s3.us-west-1.amazonaws.com TLSV1.1
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be awsexamplebucket1 [06/Feb/2019:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be A1206F460EXAMPLE REST.GET.BUCKETPOLICY - "GET /awsexamplebucket1?policy HTTP/1.1" 404 NoSuchBucketPolicy 297 - 38 - "-" "S3Console/0.4" - BNaBsXZQQDbssi6xMBdBU2sLt+Yf5kZDmeBUP35sFoKa3sLLeMC78iwEIWxs99CRUrbS4n11234= SigV2 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader awsexamplebucket1.s3.us-west-1.amazonaws.com TLSV1.1
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be awsexamplebucket1 [06/Feb/2019:00:01:00 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 7B4A0FABBEXAMPLE REST.GET.VERSIONING - "GET /awsexamplebucket1?versioning HTTP/1.1" 200 - 113 - 33 - "-" "S3Console/0.4" - Ke1bUcazaN1jWuUlPJaxF64cQVpUEhoZKEG/hmy/gijN/I1DeWqDfFvnpybfEseEME/u7ME1234= SigV2 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader awsexamplebucket1.s3.us-west-1.amazonaws.com TLSV1.1

Servicing concurrent JAX-RS requests with WebLogic 12.2.1

I wrote a JAX-RS web service method to run on WebLogic 12.2.1, to test how many concurrent requests it can handle. I purposely make the method take 5 minutes to execute.
@Singleton
@Path("Services")
@ApplicationPath("resources")
public class Services extends Application {

    private static final Logger logger = Logger.getLogger(Services.class.getName());

    private static int count = 0;

    private static synchronized int addCount(int a) {
        count = count + a;
        return count;
    }

    @GET
    @Path("Ping")
    public Response ping(@Context HttpServletRequest request) {
        int c = addCount(1);
        logger.log(INFO, "Method entered, total running requests: [{0}]", c);
        try {
            Thread.sleep(300000);
        } catch (InterruptedException exception) {
        }
        c = addCount(-1);
        logger.log(INFO, "Exiting method, total running requests: [{0}]", c);
        return Response.ok().build();
    }
}
I also wrote a stand-alone client program to send 500 concurrent requests to this service. The client uses one thread for each request.
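For reference, that kind of load client can be sketched in a few lines of Go (one goroutine per request, mirroring the one-thread-per-request client described above; the URL and port are placeholders, not from the original test):

package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	const n = 500
	url := "http://localhost:7001/Test/Services/Ping" // placeholder host/port

	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func(id int) {
			defer wg.Done()
			// The default client has no timeout, so it will wait out the
			// 5-minute sleep on the server side.
			resp, err := http.Get(url)
			if err != nil {
				fmt.Println(id, "error:", err)
				return
			}
			resp.Body.Close()
			fmt.Println(id, "status:", resp.StatusCode)
		}(i)
	}
	wg.Wait()
}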
From what I understand, WebLogic has a default maximum of 400 threads, which means it can handle 400 requests concurrently. This figure is confirmed by my test results below. As you can see, within the first 5 minutes, starting from 10:46:31, only 400 requests were serviced.
23/08/2016 10:46:31.393 [132] [INFO] [Services.ping] - Method entered, total running requests: [1]
23/08/2016 10:46:31.471 [204] [INFO] [Services.ping] - Method entered, total running requests: [2]
23/08/2016 10:46:31.471 [66] [INFO] [Services.ping] - Method entered, total running requests: [3]
23/08/2016 10:46:31.471 [210] [INFO] [Services.ping] - Method entered, total running requests: [4]
23/08/2016 10:46:31.471 [206] [INFO] [Services.ping] - Method entered, total running requests: [5]
23/08/2016 10:46:31.487 [207] [INFO] [Services.ping] - Method entered, total running requests: [6]
23/08/2016 10:46:31.487 [211] [INFO] [Services.ping] - Method entered, total running requests: [7]
23/08/2016 10:46:31.487 [267] [INFO] [Services.ping] - Method entered, total running requests: [8]
23/08/2016 10:46:31.487 [131] [INFO] [Services.ping] - Method entered, total running requests: [9]
23/08/2016 10:46:31.502 [65] [INFO] [Services.ping] - Method entered, total running requests: [10]
23/08/2016 10:46:31.518 [265] [INFO] [Services.ping] - Method entered, total running requests: [11]
23/08/2016 10:46:31.565 [266] [INFO] [Services.ping] - Method entered, total running requests: [12]
23/08/2016 10:46:35.690 [215] [INFO] [Services.ping] - Method entered, total running requests: [13]
23/08/2016 10:46:35.690 [269] [INFO] [Services.ping] - Method entered, total running requests: [14]
23/08/2016 10:46:35.690 [268] [INFO] [Services.ping] - Method entered, total running requests: [15]
23/08/2016 10:46:35.690 [214] [INFO] [Services.ping] - Method entered, total running requests: [16]
23/08/2016 10:46:35.690 [80] [INFO] [Services.ping] - Method entered, total running requests: [17]
23/08/2016 10:46:35.690 [79] [INFO] [Services.ping] - Method entered, total running requests: [18]
23/08/2016 10:46:35.690 [152] [INFO] [Services.ping] - Method entered, total running requests: [19]
23/08/2016 10:46:37.674 [158] [INFO] [Services.ping] - Method entered, total running requests: [20]
23/08/2016 10:46:37.674 [155] [INFO] [Services.ping] - Method entered, total running requests: [21]
23/08/2016 10:46:39.674 [163] [INFO] [Services.ping] - Method entered, total running requests: [22]
23/08/2016 10:46:39.705 [165] [INFO] [Services.ping] - Method entered, total running requests: [23]
23/08/2016 10:46:39.705 [82] [INFO] [Services.ping] - Method entered, total running requests: [24]
23/08/2016 10:46:39.705 [166] [INFO] [Services.ping] - Method entered, total running requests: [25]
23/08/2016 10:46:41.690 [84] [INFO] [Services.ping] - Method entered, total running requests: [26]
23/08/2016 10:46:41.690 [160] [INFO] [Services.ping] - Method entered, total running requests: [27]
23/08/2016 10:46:43.690 [226] [INFO] [Services.ping] - Method entered, total running requests: [28]
23/08/2016 10:46:43.705 [162] [INFO] [Services.ping] - Method entered, total running requests: [29]
....
....
23/08/2016 10:50:52.008 [445] [INFO] [Services.ping] - Method entered, total running requests: [398]
23/08/2016 10:50:52.008 [446] [INFO] [Services.ping] - Method entered, total running requests: [399]
23/08/2016 10:50:54.008 [447] [INFO] [Services.ping] - Method entered, total running requests: [400]
23/08/2016 10:51:31.397 [132] [INFO] [Services.ping] - Exiting method, total running requests: [399]
23/08/2016 10:51:31.475 [207] [INFO] [Services.ping] - Exiting method, total running requests: [398]
23/08/2016 10:51:31.475 [207] [INFO] [Services.ping] - Method entered, total running requests: [399]
....
....
But what I don't understand is why the first 400 requests were not serviced by the service method at the same time. As you can see from the test result, the first request was serviced at 10:46:31.393, but the 400th request was serviced at 10:50:54.008, which is more than 4 minutes later.
If we look at access.log, we can see that all 500 requests were received by WebLogic between 10:46:31 and 10:46:35. So it seems that even though WebLogic received the requests within a very short period of time, it does not allocate threads and call the service method that fast.
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
....
....
10.204.133.176 - - [23/Aug/2016:10:46:35 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:35 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:35 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:35 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
EDITED
Added a work manager to define a minimum of 400 threads.
weblogic.xml
<wls:work-manager>
    <wls:name>HighPriorityWorkManager</wls:name>
    <wls:fair-share-request-class>
        <wls:name>HighPriority</wls:name>
        <wls:fair-share>100</wls:fair-share>
    </wls:fair-share-request-class>
    <wls:min-threads-constraint>
        <wls:name>MinThreadsCount</wls:name>
        <wls:count>400</wls:count>
    </wls:min-threads-constraint>
</wls:work-manager>
web.xml
<init-param>
    <param-name>wl-dispatch-policy</param-name>
    <param-value>HighPriorityWorkManager</param-value>
</init-param>
That's how WebLogic scales its thread pools (they are "self-tuning"): it does not start 400 threads immediately, but rather slowly increases the number of threads to maximize throughput.
https://docs.oracle.com/cd/E24329_01/web.1211/e24432/self_tuned.htm#CNFGD113

nginx, gunicorn and django timing out

I'm so confused!
I set everything up, my site was working for two days, and then suddenly today it stopped working.
The only thing I changed was that yesterday I was trying to serve PHP files, so I installed PHP and uWSGI. It was late and I didn't fully realize what I was doing. I was following this page: http://uwsgi-docs.readthedocs.org/en/latest/PHP.html
# Add ppa with libphp5-embed package
sudo add-apt-repository ppa:l-mierzwa/lucid-php5
# Update to use package from ppa
sudo apt-get update
# Install needed dependencies
sudo apt-get install php5-dev libphp5-embed libonig-dev libqdbm-dev
# Compile uWSGI PHP plugin
python uwsgiconfig --plugin plugins/php
But I didn't change any settings, and even after doing that everything was still fine. However, the next day my site just wouldn't load.
I tried a few things which didn't work. In my settings:
ALLOWED_HOSTS = ['*']
In my gunicorn.sh, I set TIMEOUT=60. However, when I try to access my site (lewischi.com), nothing even happens. But when I go to http://127.0.0.1:8000, I do see workers doing stuff and get a 404 error.
Using the URLconf defined in django_project.urls,
Django tried these URL patterns, in this order:
I'm not sure what's going on! The nginx error log isn't very helpful, but the access log seems more useful.
From my nginx-access.log (it works, then stops working):
50.156.86.221 - - [25/Sep/2015:00:25:43 -0700] "GET /codeWindow.html HTTP/1.1" 200 2081 "http://lewischi.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36"
50.156.86.221 - - [25/Sep/2015:00:25:58 -0700] "GET /test.jpg HTTP/1.1" 404 208 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36"
192.168.2.6 - - [25/Sep/2015:16:42:19 -0700] "GET / HTTP/1.1" 200 9596 "-" "-"
192.168.2.6 - - [25/Sep/2015:17:24:44 -0700] "GET / HTTP/1.1" 200 9596 "-" "-"
192.168.2.6 - - [25/Sep/2015:23:28:51 -0700] "GET / HTTP/1.1" 200 9596 "-" "-"
192.168.2.6 - - [25/Sep/2015:23:29:02 -0700] "GET / HTTP/1.1" 200 9596 "-" "-"
From my supervisor log file:
supervisor: couldn't exec /home/lewischi/projects/active/django_project/gunicorn.sh: ENOEXEC
supervisor: child process was not spawned
ANY HELP would be greatly appreciated!!!! I feel like I should just uninstall uwsgi. I don't want to break anything so I'm asking for advice before I go messing things up.
I'm pretty new to this so I may be overlooking something obvious. My gunicorn debug mode output:
“Starting ”djangotut” as lewischi”
[2015-09-26 17:50:28 +0000] [2316] [DEBUG] Current configuration:
proxy_protocol: False
worker_connections: 1000
statsd_host: None
max_requests_jitter: 0
post_fork: <function post_fork at 0x7faf049ec848>
pythonpath: None
enable_stdio_inheritance: False
worker_class: sync
ssl_version: 3
suppress_ragged_eofs: True
syslog: False
syslog_facility: user
when_ready: <function when_ready at 0x7faf049ec578>
pre_fork: <function pre_fork at 0x7faf049ec6e0>
cert_reqs: 0
preload_app: False
keepalive: 2
accesslog: None
group: 1000
graceful_timeout: 30
do_handshake_on_connect: False
spew: False
workers: 3
proc_name: ”djangotut”
sendfile: True
pidfile: None
umask: 0
on_reload: <function on_reload at 0x7faf049ec410>
pre_exec: <function pre_exec at 0x7faf049ecde8>
worker_tmp_dir: None
post_worker_init: <function post_worker_init at 0x7faf049ec9b0>
limit_request_fields: 100
on_exit: <function on_exit at 0x7faf049f2500>
config: None
secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
proxy_allow_ips: ['127.0.0.1']
pre_request: <function pre_request at 0x7faf049ecf50>
post_request: <function post_request at 0x7faf049f20c8>
user: 1000
forwarded_allow_ips: ['127.0.0.1']
worker_int: <function worker_int at 0x7faf049ecb18>
threads: 1
max_requests: 1
limit_request_line: 4094
access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
certfile: None
worker_exit: <function worker_exit at 0x7faf049f2230>
chdir: /home/lewischi/projects/active/django_project
paste: None
default_proc_name: django_project.wsgi:application
errorlog: -
loglevel: DEBUG
logconfig: None
syslog_addr: udp://localhost:514
syslog_prefix: None
daemon: False
ciphers: TLSv1
on_starting: <function on_starting at 0x7faf049ec2a8>
worker_abort: <function worker_abort at 0x7faf049ecc80>
bind: ['0.0.0.0:8000']
raw_env: []
reload: False
check_config: False
limit_request_field_size: 8190
nworkers_changed: <function nworkers_changed at 0x7faf049f2398>
timeout: 60
ca_certs: None
django_settings: None
tmp_upload_dir: None
keyfile: None
backlog: 2048
logger_class: gunicorn.glogging.Logger
statsd_prefix:
[2015-09-26 17:50:28 +0000] [2316] [INFO] Starting gunicorn 19.3.0
[2015-09-26 17:50:28 +0000] [2316] [DEBUG] Arbiter booted
[2015-09-26 17:50:28 +0000] [2316] [INFO] Listening at: http://0.0.0.0:8000 (2316)
[2015-09-26 17:50:28 +0000] [2316] [INFO] Using worker: sync
[2015-09-26 17:50:28 +0000] [2327] [INFO] Booting worker with pid: 2327
[2015-09-26 17:50:28 +0000] [2328] [INFO] Booting worker with pid: 2328
[2015-09-26 17:50:28 +0000] [2329] [INFO] Booting worker with pid: 2329
[2015-09-26 17:50:29 +0000] [2316] [DEBUG] 3 workers
[2015-09-26 17:50:30 +0000] [2316] [DEBUG] 3 workers
The problem is not with supervisord itself. A few things to consider when dealing with Nginx, Gunicorn and Django in general:
Make sure the user running the app process (at minimum one non-root user, not counting the users created by default for e.g. Nginx or PostgreSQL; this varies with the stack) has the right permissions and ownership to do its job.
When adding another app to your stack, first check the port it runs on by default and change it to prevent port conflicts. Keep in mind the difference between internal and external ports, since you use Nginx as a proxy to Gunicorn (this is what causes most timeouts; it has happened to me several times during late-night work). You can use Nginx as a proxy server and give each app its own unique internal port.
Given the supervisor error log you provided, it seems you're running gunicorn.sh either as a user that doesn't have sufficient permissions or ownership, or with the wrong command; ENOEXEC usually means the file could not be executed directly, for example a shell script that is missing its #!/bin/bash shebang line or its execute bit.
Please provide the supervisor config file relevant to your app.
Update: it seems his IP address changed.
Ah, never mind. Thanks for your time.
It turned out that my IP address somehow changed, which should not have happened... Rookie mistake.