In my Lua config, I get an error for every metric:
prometheus.lua:710: log_error(): No value passed for upstream_time while logging request
I check whether the value is nil or empty, but that does not seem to be enough to resolve it.
init_worker_by_lua_block {
...
metric_upstream_time = prometheus:histogram( "upstream_time", "Upstream", {"host"} )
...
}
log_by_lua_block {
...
if (not (ngx.var.upstream_response_time == nil or ngx.var.upstream_response_time == '' ))
then
metric_upstream_time:observe(tonumber(ngx.var.upstream_response_time), {ngx.var.server_name})
end
...
}
nginx_error.log
Jan 25 13:56:41 host nginx: 2022/01/25 13:56:41 [error] 142#142: *106669580 [lua] prometheus.lua:734: log_error(): No value passed for upstream_time while logging request, client: xx.xx.201.231, server: xxxxxx, request: "GET /api/v1/xxxxxx HTTP/1.1", upstream: "http://127.0.0.1:20190/api/v1/images/xxxx", host: "xxxxxx", referrer: "xxxxxxx"
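For what it's worth, $upstream_response_time can hold several values separated by commas and colons when nginx tries more than one upstream, and tonumber() returns nil for such a string, so the nil/empty check alone still lets a nil reach observe(). A more defensive guard might look like this sketch (reusing the metric and variable names from above; taking the last value is just one possible convention):
log_by_lua_block {
    ...
    -- $upstream_response_time may look like "0.004, 0.012" after retries;
    -- take the last numeric token and only observe if it parses as a number
    local raw = ngx.var.upstream_response_time
    if raw and raw ~= '' then
        local value = tonumber(raw:match("([%d%.]+)%s*$"))
        if value then
            metric_upstream_time:observe(value, {ngx.var.server_name})
        end
    end
    ...
}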
I'm trying to set up an AWS API Gateway for something which was previously handled by an nginx reverse proxy. My endpoints are EC2 instances inside a VPC. I've already set it up so the gateway can access these instances.
The previous nginx setup looked like this:
http {
server {
listen 80;
location /host1/ {
proxy_pass http://host1:8000/;
}
location /host2/ {
proxy_pass http://host2:8070/;
}
...
}
}
The problem arises when I try to rewrite the request path. I've set up a test route in the Gateway, ANY /test/{proxy+}, which I pass on to the corresponding EC2 instance. I've verified that requests pass through, but they contain the complete path of the request:
# machine 1:
curl -v 'https://<endpoint>.amazonaws.com/test/hello_world/test/a'
< HTTP/2 404
< date: Sat, 18 Dec 2021 09:21:42 GMT
< content-type: text/html;charset=utf-8
< content-length: 469
< server: SimpleHTTP/0.6 Python/3.7.10
< apigw-requestid: Kic2FiLIFiAEN_g=
<
--- response ---
# server:
192.168.9.6 - - [18/Dec/2021 09:15:05] "GET /test/hello_world/test/a HTTP/1.1" 404 -
(the 404 is expected, the important part is the request hitting the server)
I then tried to rewrite the request path to remove the leading /test using a parameter mapping: I specified "all incoming requests", Parameter to modify: path, Modification type: overwrite, Value: $request.path.proxy (the catch-all field defined in the route).
Now I get a 400 error, and the requests don't hit my server anymore:
# machine 1:
curl -v 'https://<endpoint>.amazonaws.com/test/hello_world/test/a'
< HTTP/2 400
< date: Sat, 18 Dec 2021 09:19:53 GMT
< content-type: text/html
< content-length: 122
< server: awselb/2.0
< apigw-requestid: KiclDhxXFiAEMhg=
<
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
</body>
</html>
# server:
-nothing-
When I map $request.path.proxy to querystring.path instead of path, the requests do hit the server:
# machine 1:
curl -v 'https://<endpoint>.amazonaws.com/test/hello_world/test/a'
< HTTP/2 404
< date: Sat, 18 Dec 2021 09:21:42 GMT
< content-type: text/html;charset=utf-8
< content-length: 469
< server: SimpleHTTP/0.6 Python/3.7.10
< apigw-requestid: Kic2FiLIFiAEN_g=
<
--- response ---
# server:
192.168.9.6 - - [18/Dec/2021 09:21:42] "GET /test/hello_world/test/a?path=hello_world%2Ftest%2Fa HTTP/1.1" 404 -
Notice that the value of the path query parameter is exactly the value I wanted to use to replace the original request path.
Is this a bug in AWS, or am I just missing some documentation stating that you cannot rewrite the path that way? Notably, when the {proxy+} path parameter is empty, requests get routed through correctly...
The problem was with the value of the path rewrite: It should have been /$request.path.proxy instead of $request.path.proxy.
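For reference, the working mapping expressed through the AWS CLI for HTTP APIs would be roughly the sketch below (the api-id and integration-id are placeholders, and the overwrite:path key is quoted from memory of the parameter-mapping syntax, so treat it as an assumption to verify):
# sketch: overwrite the integration request path with the {proxy} capture,
# keeping the leading slash the upstream expects
aws apigatewayv2 update-integration \
    --api-id abc123 \
    --integration-id xyz789 \
    --request-parameters '{"overwrite:path": "/$request.path.proxy"}'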
I'd like to use fail2ban to ban anyone generating either of these two types of lines in my nginx error.log file:
2019/12/15 20:12:12 [error] 640#640: *6 open() "/data/xxxxxx.com/www/50x.html" failed (2: No such file or directory), client: 35.187.45.148, server: xxxxxx.com, request: "GET /external.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: x.x.x.x
2019/12/16 13:42:59 [crit] 647#647: *41 connect() to unix:/var/run/php5-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 35.233.78.55, server: xxxxxx.com, request: "GET /external.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "x.x.x.x"
I thought these lines would work:
open() .* client: <HOST>
connect() to .* client: <HOST>
But they apparently don't (tested with fail2ban-regex). Here's my complete filter:
[Definition]
failregex = open() .* client: <HOST>
connect() to .* client: <HOST>
FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: <HOST>
datepattern = {^LN-BEG}
Note: the last one (FastCGI...) does work. Could something be wrong with ".*"?
Parentheses ( and ) are both regex metacharacters, i.e. they have special meaning in a regex. For example, here is what your first regex actually matches:
open .* client:
That is, the () is a zero-width capture group, so it is the same as matching nothing at all. Your pattern therefore effectively requires open to be followed by a space, while the log line has open(), so you fail to get a match. Here is the corrected version:
[Definition]
failregex = open\(\) .* client: <HOST>
connect\(\) to .* client: <HOST>
FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: <HOST>
datepattern = {^LN-BEG}
Note that to match literal parentheses, we must escape them with a backslash.
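With the parentheses escaped, the filter can be re-checked the same way it was tested before, for example (the filter file name and log path below are only typical locations and may differ on your system):
fail2ban-regex /var/log/nginx/error.log /etc/fail2ban/filter.d/nginx-errorlog.conf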
I am receiving a large number of requests for PHP files that do not exist in my WordPress installation.
They show up in the nginx error log as in the following two examples:
2019/06/24 03:16:43 [error] 4201#4201: *17573871 FastCGI sent in stderr: "Unable to open primary script: /var/www/html/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php (No such file or directory)" while reading response header from upstream, client: 172.68.189.50, server: mywebsite.net, request: "GET /vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "mywebsite.net"
2019/06/24 03:16:43 [error] 4201#4201: *17573871 FastCGI sent in stderr: "Unable to open primary script: /var/www/html/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php (No such file or directory)" while reading response header from upstream, client: 172.68.189.50, server: mywebsite.net, request: "POST /vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "mywebsite.net"
I have tried making a noscript filter.
In file /etc/fail2ban/jail.local I put:
[nginx-noscript]
enabled = true
port = http,https
filter = nginx-noscript
logpath = /var/log/nginx/error.log
maxretry = 2
In File /etc/fail2ban/filter.d/nginx-noscript.conf I put:
[Definition]
failregex = \[error\] \d+#\d+: \*\d+ (FastCGI sent in stderr: "Unable to open primary script:)
ignoreregex =
But this filter is not catching this type of 404. After systemctl restart fail2ban, the fail2ban log shows these error messages:
2019-06-24 16:11:05,548 fail2ban.filter [6182]: ERROR No failure-id group in '\[error\] \d+#\d+: \*\d+ (FastCGI sent in stderr: "Unable to open primary script:)'
2019-06-24 16:11:05,548 fail2ban.transmitter [6182]: WARNING Command ['set', 'nginx-noscript', 'addfailregex', '\\[error\\] \\d+#\\d+: \\*\\d+ (FastCGI sent in stderr: "Unable to open primary script:)'] has failed. Received RegexException('No failure-id group in \'\\[error\\] \\d+#\\d+: \\*\\d+ (FastCGI sent in stderr: "Unable to open primary script:)\'',)
2019-06-24 16:11:05,549 fail2ban [6182]: ERROR NOK: ('No failure-id group in \'\\[error\\] \\d+#\\d+: \\*\\d+ (FastCGI sent in stderr: "Unable to open primary script:)\'',)
What am I doing wrong? What would be the full failregex for such nginx error log lines?
This should work (for fail2ban >= 0.10):
failregex = ^\s*\[error\] \d+#\d+: \*\d+ FastCGI sent in stderr: "Unable to open primary script: [^"]*" while reading response header from upstream, client: <ADDR>
If you have an older version (0.9 or below), use <HOST> instead of <ADDR> (and preferably disable DNS lookups for the jail with usedns = no).
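Put together, /etc/fail2ban/filter.d/nginx-noscript.conf from the question would then look like this:
[Definition]
failregex = ^\s*\[error\] \d+#\d+: \*\d+ FastCGI sent in stderr: "Unable to open primary script: [^"]*" while reading response header from upstream, client: <ADDR>
ignoreregex =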
I'm working on a project that uses Angular + Django (Django REST Framework). During development, CORS support is handled by django-cors-headers, with CORS_ORIGIN_ALLOW_ALL = True and CORS_ALLOW_CREDENTIALS = True.
When I try to send POST requests to create some resources from the frontend (Angular), some of the pre-flight OPTIONS requests sent by Chrome are answered successfully by the backend server (python manage.py runserver), but others are not. Those requests are canceled for an unknown reason, even though the backend server logs indicate that the requests are received and accepted by the server.
The headers of the failed requests are shown below.
However, if I copy the headers and send the same request with curl, it works as expected.
$ curl -v -X OPTIONS -H "Access-Control-Request-Headers: authorization,content-type" -H "Access-Control-Request-Method: POST" -H "DNS: 1" -H "Origin: http://localhost:4200" -H "Referer: http://localhost:4200" -H "User-Agent: Mozilla/5.0" http:/localhost:8000/api/user-permissions/
* Unwillingly accepted illegal URL using 1 slash!
* Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 8000 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8000 (#0)
> OPTIONS /api/user-permissions/ HTTP/1.1
> Host: localhost:8000
> Accept: */*
> Access-Control-Request-Headers: authorization,content-type
> Access-Control-Request-Method: POST
> DNS: 1
> Origin: http://localhost:4200
> Referer: http://localhost:4200
> User-Agent: Mozilla/5.0
>
< HTTP/1.1 200 OK
< Date: Wed, 20 Feb 2019 02:47:39 GMT
< Server: WSGIServer/0.2 CPython/3.7.1
< Content-Type: text/html; charset=utf-8
< Content-Length: 0
< Vary: Origin
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Origin: http://localhost:4200
< Access-Control-Allow-Headers: accept, accept-encoding, authorization, content-type, dnt, origin, user-agent, x-csrftoken, x-requested-with
< Access-Control-Allow-Methods: DELETE, GET, OPTIONS, PATCH, POST, PUT
< Access-Control-Max-Age: 86400
<
* Connection #0 to host localhost left intact
Any ideas how this happens? Thanks.
Sample Code:
// The method of the component that invokes the methods of PermissionService.
/** Update selected user's permissions. */
updatePermissions() {
const diff = this.diffPermissions();
const toBeCreated = diff[0];
const toBeDeleted = diff[1];
this.isLoading = true;
zip(
this.permissionService.createUserPermissions(toBeCreated),
this.permissionService.deleteUserPermissions(toBeDeleted),
).pipe(
map(() => true),
catchError((err: HttpErrorResponse) => {
alert(err.message);
return observableOf(false);
}),
).subscribe(succeed => {
this.isLoading = false;
});
}
// The methods of PermissionService that issue the HTTP requests.
createUserPermission(req: UserPermissionRequest) {
return this.http.post(`${environment.API_URL}/user-permissions/`, req);
}
createUserPermissions(reqs: UserPermissionRequest[]) {
// TODO(youchen): Evaluate the performance cost.
return forkJoin(reqs.map(req => this.createUserPermission(req)));
}
deleteUserPermission(permissionId: number) {
return this.http.delete(`${environment.API_URL}/user-permissions/${permissionId}/`);
}
deleteUserPermissions(permissionIds: number[]) {
// TODO(youchen): Evaluate the performance cost.
return forkJoin(permissionIds.map(id => this.deleteUserPermission(id)));
}
Found the cause: zip() with no parameters
In my case, I'm using zip to combine creations and deletions, see:
const createRequests = [c1, c2];
const deleteRequests = [d1, d2];
zip(
this.service.create(createRequests),
this.service.delete(deleteRequests),
)....
---
service.ts
create(reqs: CreateRequest[]) {
return zip(...reqs.map(req => this.createSingle(req)));
}
delete(reqs: DeleteRequest[]) {
return zip(...reqs.map(req => this.deleteSingle(req)));
}
But if either createRequests or deleteRequests is an empty list, this logic goes wrong. For example, if createRequests is empty while deleteRequests isn't, all HTTP requests fired by this.service.delete(deleteRequests) will be canceled, because this.service.create(createRequests) returns an empty zip() that completes without ever emitting.
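A minimal, self-contained illustration of that cancellation behaviour (RxJS 6-style imports; the delayed of() merely stands in for the pending HTTP deletes):
import { of, zip } from 'rxjs';
import { delay } from 'rxjs/operators';

// zip() with no sources completes immediately without emitting a value,
// so the outer zip can never assemble a pair and unsubscribes from the
// still-pending source, which is what cancels the in-flight requests.
zip(
  zip(),                                 // what create([]) effectively returns
  of('deletes done').pipe(delay(1000)),  // stands in for the HTTP deletes
).subscribe({
  next: value => console.log('next', value),  // never reached
  complete: () => console.log('complete'),    // fires almost immediately
});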
Solution:
The solution is to check the length of reqs and return a different observable instead.
Fixed code:
create(reqs: CreateRequest[]) {
if (reqs.length === 0) return of([]);
return zip(...reqs.map(req => this.createSingle(req)));
}
delete(reqs: DeleteRequest[]) {
if (reqs.length === 0) return of([]);
return zip(...reqs.map(req => this.deleteSingle(req)));
}
I have a Django app hosted via nginx and uWSGI. For a certain very simple request, I get different behaviour for GET and POST, which should not be the case.
The uWsgi daemon log:
[pid: 32454|app: 0|req: 5/17] 127.0.0.1 () {36 vars in 636 bytes} [Tue Oct 19 11:18:36 2010] POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ => generated 80 bytes in 3 msecs (HTTP/1.0 440) 1 headers in 76 bytes (0 async switches on async core 0)
[pid: 32455|app: 0|req: 5/18] 127.0.0.1 () {32 vars in 521 bytes} [Tue Oct 19 11:18:50 2010] GET /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ => generated 80 bytes in 3 msecs (HTTP/1.0 440) 1 headers in 76 bytes (0 async switches on async core 0)
The Nginx accesslog:
127.0.0.1 - - [19/Oct/2010:18:18:36 +0200] "POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0" 440 0 "-" "curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15"
127.0.0.1 - - [19/Oct/2010:18:18:50 +0200] "GET /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0" 440 80 "-" "curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15"
The Nginx errorlog:
2010/10/19 18:18:36 [error] 4615#0: *5 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /buy/76d4f520ae82e1dfd35564aed64a885b/a_2/10/ HTTP/1.0", upstream: "uwsgi://unix:sock/uwsgi.sock:", host: "localhost:9201"
In essence, nginx loses the response somewhere if I use POST, but not if I use GET.
Does anybody know anything about this?
Pass --post-buffering 1 to uWSGI.
This will automatically buffer any HTTP request body larger than 1 byte.
The problem is caused by the way nginx manages upstream disconnections.
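For example, in command-line form (the ini equivalent is post-buffering = 1; the socket path is taken from the error log above, and the module name is only a placeholder for however the app is actually started):
uwsgi --socket sock/uwsgi.sock --module myproject.wsgi --post-buffering 1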
I hit the same issue, but in my case I can't disable uwsgi_pass_request_body, as most of the time (but not always) my app does need the POST data.
This is the workaround I found for as long as this issue is not fixed in uWSGI:
http://permalink.gmane.org/gmane.comp.python.wsgi.uwsgi.general/813
import django.core.handlers.wsgi

class ForcePostHandler(django.core.handlers.wsgi.WSGIHandler):
    """Workaround for: http://lists.unbit.it/pipermail/uwsgi/2011-February/001395.html"""
    def get_response(self, request):
        request.POST  # force reading of POST data
        return super(ForcePostHandler, self).get_response(request)

application = ForcePostHandler()
I am facing the same issue. I tried all the solutions above, but they did not work. Ignoring the request body in my case is simply not an option.
Apparently it is a bug in nginx and uWSGI when dealing with POST requests whose response is smaller than 4052 bytes.
What solved it for me was adding --pep3333-input to the uWSGI parameter list. After that, all POSTs are returned correctly.
Versions of nginx/uwsgi I'm using:
$ nginx -V
nginx: nginx version: nginx/0.9.6
$ uwsgi --version
uWSGI 0.9.7
A lucky find during further research (http://answerpot.com/showthread.php?577619-Several%20Bugs/Page2) turned up something that helped...
Supplying the uwsgi_pass_request_body off; directive in the nginx conf resolves this problem...
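In context, that directive goes inside the location block that hands requests to uWSGI, roughly like this sketch (the socket path is taken from the error log above; the rest is illustrative):
location / {
    include uwsgi_params;
    uwsgi_pass unix:sock/uwsgi.sock;
    # nginx stops forwarding the request body, so the app must not need POST data
    uwsgi_pass_request_body off;
}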