Remove HTTP headers from Prometheus in Zabbix - regex

I have a server with the Nginx VTS module installed, which outputs metrics in Prometheus format.
When I run an active check of web.page.get via Zabbix, I get the HTTP headers and then the data in the format below:
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 24 Sep 2020 09:16:20 GMT
Content-Type: text/plain
Content-Length: 33769
Connection: close
Vary: Accept-Encoding
# HELP nginx_vts_info Nginx info
# TYPE nginx_vts_info gauge
nginx_vts_info{hostname="example",version="1.18.0"} 1
# HELP nginx_vts_start_time_seconds Nginx start time
# TYPE nginx_vts_start_time_seconds gauge
nginx_vts_start_time_seconds 1600367492.145
# snip output...
I wrote a regular expression that removes the headers, but it only captures the first line:
\n\s?\n(.*)
which returns just:
# HELP nginx_vts_info Nginx info
How do I rewrite the expression so that the header is removed and the rest of the data is available?

Please try the regex below:
\n\s?\n([\s\S]*)
In regex, . won't match newlines unless a specific flag is set, hence in your example only the first line was returned. Rewriting the capture group so that it matches newlines as well will help.
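As a quick sanity check outside Zabbix (a plain Python sketch of my own, with the body abridged), you can see the difference between the two capture groups:

import re

# Abridged version of the web.page.get output: headers, blank line, body.
raw = ("HTTP/1.1 200 OK\r\n"
       "Server: nginx\r\n"
       "Content-Type: text/plain\r\n"
       "\r\n"
       "# HELP nginx_vts_info Nginx info\n"
       "# TYPE nginx_vts_info gauge\n"
       'nginx_vts_info{hostname="example",version="1.18.0"} 1\n')

# (.*) stops at the first newline, so it captures only "# HELP ...";
# [\s\S]* also matches newlines and keeps the whole body.
m = re.search(r'\n\s?\n([\s\S]*)', raw)
if m:
    print(m.group(1))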

Grok pattern to extract specific data only from log

I am trying to extract some specific data from PostgreSQL logs using Grok parsing rules in Datadog. I want to extract the following, in JSON format, from the logs below:
{
  dbuser {
    AROAXXXXXXXXXXXXXXXXX : username
  }
}
The logs from which I am trying to extract the above information:
2022-11-11 09:09:15 UTC:10.116.0.244(57888):AROAXXXXXXXXXXXXXXXXX:username#database_name:[592]:LOG: AUDIT: SESSION,3016,1,READ,SELECT,,,"/*pga4dash*/
2022-11-11 09:20:53 UTC:10.116.0.244(57946):AROAXXXXXXXXXXXXXXXXX:username#database_name:[7696]:LOG: pam_authenticate failed: Permission denied
2022-11-11 09:27:02 UTC:10.116.0.244(57984):AROAXXXXXXXXXXXXXXXXX:username#database_name:[8328]:LOG: AUDIT: SESSION,1,1,ROLE,ALTER ROLE,,,ALTER USER app_user SET pgaudit.log TO 'NONE';,<not logged>
2022-11-11 09:21:57 UTC:10.117.0.98(44764):AROAXXXXXXXXXXXXXXXXX:username#database_name:[2873]:FATAL: pg_hba.conf rejects connection for host "10.117.0.98", user "AROAXXXXXXXXXXXXXXXXX:username", database "database_name", SSL off
* Trying 127.0.0.1:1108...
* Connected to rdsauthproxy (127.0.0.1) port 1108 (#0)
> POST /authenticateRequest HTTP/1.1
Host: rdsauthproxy:1108
Accept: */*
Content-Length: 1884
Content-Type: multipart/form-data; boundary=------------------------1b12ee5d61245d84
* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
<
* Connection #0 to host rdsauthproxy left intact
What I have tried and achieved so far
This is what I have tried so far as a generalisation that should work on all the logs, but it gives me no output:
%{date("yyyy-MM-dd HH:mm:ss z"):}\:%{ipv4:}\(%{number:}\)\:%{data:dbuser:keyvalue(":")}
If I use the following, it gives me the desired output, but it only works for the first log pattern mentioned above:
%{date("yyyy-MM-dd HH:mm:ss z"):}\:%{ipv4:}\(%{number:}\)\:%{data:dbuser:keyvalue(":")}:\[592\]\:LOG\:\s+AUDIT\:\s+SESSION,%{integer:},1,READ,SELECT,,,\"%{notSpace:}%{data}
If there is a way to ignore the rest of the log line and extract only the exact match, please help me out.
I was able to figure out the solution to the above question. The following is the parse rule I used, which helped me achieve what I wanted:
%{date("yyyy-MM-dd HH:mm:ss z"):}\:%{ipv4:}\(%{number:}\)\:%{word}:%{data:database.username}:%{data}
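If you want to sanity-check the capture logic outside Datadog, here is a rough plain-regex equivalent in Python (my own illustration, not Grok syntax; the group names are arbitrary):

import re

# One of the log lines from the question.
line = ('2022-11-11 09:20:53 UTC:10.116.0.244(57946):'
        'AROAXXXXXXXXXXXXXXXXX:username#database_name:[7696]:LOG: '
        'pam_authenticate failed: Permission denied')

pattern = re.compile(
    r'^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} \w+)'   # 2022-11-11 09:20:53 UTC
    r':(?P<ip>\d{1,3}(?:\.\d{1,3}){3})\((?P<port>\d+)\)'   # 10.116.0.244(57946)
    r':(?P<role>\w+)'                                      # AROAXXXXXXXXXXXXXXXXX
    r':(?P<username>[^#]+)#'                               # username (before the '#')
)

m = pattern.match(line)
if m:
    print(m.group('role'), m.group('username'))  # AROAXXXXXXXXXXXXXXXXX username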

Does SNS HTTP/S delivery honor any HTTP codes?

I created a test to fill my SNS dead-letter queue, to help me develop code that reads from this queue. Long story short, I thought an HTTP error would be the easiest way to simulate failures, but surprisingly, they seem to be counted as successes.
In case I am doing it wrong, and for the benefit of anyone else who wants to try this out, here is my methodology. I created an HTTP/S endpoint specifically for this test using a bash one-liner:
while true; do echo -e "HTTP/1.1 200 OK\n" | nc -Nl 9078; echo "" && date; done
So far so good. I decided that returning a 401 code might be easiest, so I captured a 401 page's output with netcat:
HTTP/1.1 401 Unauthorized
Server: nginx/1.21.0
Date: Wed, 01 Sep 2021 12:22:03 GMT
Content-Type: text/html
Content-Length: 179
Connection: keep-alive
WWW-Authenticate: Basic realm="Restricted example.com"
Strict-Transport-Security: max-age=31536000
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.21.0</center>
</body>
</html>
I altered my one-liner accordingly:
while true; do echo -e "$(cat 401error)\n" | nc -Nl 9078; echo "" && date; done
I verified that visiting this page in Firefox would pop up a password dialog.
Come test time, SNS blunders along and delivers the message without fear. The message never appears in the DLQ:
POST /poot/testingevent HTTP/1.1
x-amz-sns-message-type: Notification
x-amz-sns-message-id: REDACTED
x-amz-sns-topic-arn: REDACTED
x-amz-sns-subscription-arn: REDACTED
x-amz-sns-rawdelivery: true
Content-Length: 24
Content-Type: text/plain; charset=UTF-8
Host: example.com:9078
Connection: Keep-Alive
User-Agent: Amazon Simple Notification Service Agent
Accept-Encoding: gzip,deflate
{"401 for sure man": 11}
Wed Sep 1 12:25:31 UTC 2021
Does anyone know? Nothing uncovered so far by duckduckgoing "http code" sns. Since I can capture other codes (403, 500, etc.) using netcat, I thought it might be useful to know which, if any, are honored.
Any status code outside the range 200-499 will be considered a failure and retried according to your retry policy, as per https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html. Once the maximum number of retries has been exhausted, the message will be delivered to a DLQ if one is configured.
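If you want to see the failure path end to end, here is a minimal sketch (plain Python of my own, not AWS code) of an endpoint that always answers 500, which per the above should be treated as a failure, retried, and eventually land in the DLQ; the port mirrors the netcat example:

from http.server import BaseHTTPRequestHandler, HTTPServer

class AlwaysFail(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)              # the SNS notification payload
        print("received:", body.decode(errors="replace"))
        self.send_response(500)                     # outside 200-499 -> counted as a failure
        self.send_header("Content-Length", "0")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 9078), AlwaysFail).serve_forever()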

HTTP/1.1 401 Unauthorized in Response Headers in LoadRunner for GET Requests

I am new to LoadRunner and am facing an issue during playback of the script.
LR 12.50
OS: Windows 7 SP2
Protocol: Mobile HTTP/HTML
Recording mode: Proxy
Let me explain my scenario.
While executing the following function:
web_custom_request("authenticate",
"URL=https://ws-xx.xxx.com/tcs/rest/authenticate?include=user,company",
"Method=POST",
"Resource=0",
"RecContentType=application/json",
"Referer=",
"Snapshot=t1.inf",
"Mode=HTTP",
"EncType=application/json",
"Body={\"password\":\"xxx\",\"username\":\"xxx\",\"version\":\"1.0.40\"}",
LAST);
For the above POST method, I get the response below:
HTTP/1.1 200 OK\r\n
Date: Tue, 13 Oct 2015 19:19:21 GMT\r\n
Server: Apache-Coyote/1.1\r\n
Content-Type: application/json\r\n
Set-Cookie: dtCookie=DBE9311E44E5C47902702DC762030583|TXlBcHB8MQ; Path=/;
Domain=.xxx.com\r\n
Connection: close\r\n
Transfer-Encoding: chunked\r\n
Which is fine. Now the second custom request is shown below:
web_custom_request("profiles",
"URL=https://ws-test.xxx.com/tcs/rest/profiles",
"Method=GET",
"Resource=1",
"RecContentType=application/json",
"Referer=",
"Snapshot=t2.inf",
LAST);
For the above GET request, in the replay logs I get a 401 Unauthorized error:
GET /tcs/rest/profiles HTTP/1.1\r\n
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT)\r\n
Accept: */*\r\n
Connection: Keep-Alive\r\n
Host: ws-test.xxx.com\r\n
Cookie: dtCookie=DBE9311E44E5C47902702DC762030583|TXlBcHB8MQ\r\n
\r\n
t=5921ms: 172-byte response headers for "https://ws-test.xxx.com/tcs/rest/profiles" (RelFrameId=1, Internal ID=2)
HTTP/1.1 401 Unauthorized\r\n
Date: Tue, 13 Oct 2015 19:19:22 GMT\r\n
Server: Apache-Coyote/1.1\r\n
Content-Type: application/json\r\n
Connection: close\r\n
Transfer-Encoding: chunked\r\n
\r\n
t=5922ms: 4-byte chunked response overhead for "https://ws-test.xxx.com/tcs/rest/profiles" (RelFrameId=1, Internal ID=2)
8b\r\n
t=5923ms: 139-byte chunked response body for "https://ws-test.xxx.com/tcs/rest/profiles" (RelFrameId=1, Internal ID=2)
{"errors":[{"message":"Authentication required to access endpoint","status":"401","code":"
NotAuthenticated","header":"Not Authenticated"}]}
I referred to this link.
My understanding from the above custom requests is that the login succeeds but the subsequent requests fail.
I used the web_cleanup_cookies() function, but it didn't solve the issue.
I tried to capture the cookie value using the function below:
web_reg_save_param("COOKIE_ID",
"LR= Cookie: dtCookie=" ,
"RB= |TXlBcHB8MQ\r\n",
"Ord=All",
"RelFrameId=1",
"Search=All",
LAST);
web_add_header("Cookie",lr_eval_string("{COOKIE_ID}"));
Now the question is: where do I place the parameter "COOKIE_ID" in my script when there is no value for COOKIE_ID in the script?
How do I handle this issue? Can anybody please help me?
Please add the settings below to the script:
web_set_sockets_option("SSL_VERSION","TLS");
web_set_user("username", "password", "domain:portno" );
web_set_sockets_option("INITIAL_BASIC_AUTH","1");
In VuGen, select the snapshot view and compare the recorded and replayed requests; I suspect there may be a header missing from the replay request.
If the cookie is the only thing that changes, you can add it using the web_add_cookie function.

JMeter - Regular expression extractor

My JMeter response returns a 'Location' field in the response headers. I want to fetch this Location header and use it in my other requests.
Sample Start: 2015-07-24 14:46:38 CEST
Load time: 163
Latency: 163
Size in bytes: 372
Headers size in bytes: 350
Body size in bytes: 22
Sample Count: 1
Error Count: 1
Response code: 201
Response message: Processed
Response headers:
HTTP/1.1 201 Processed
X-Backside-Transport: OK OK,FAIL FAIL
Connection: Keep-Alive
Transfer-Encoding: chunked
****Location: /retail/iows/ie/en/storage/servicedocs/paxplanner/2015-07-24/eCommerce.pdf****
X-Client-IP: 127.0.0.1,10.62.26.150
Content-Type: application/octet-stream
Date: Fri, 24 Jul 2015 12:46:38 GMT
X-Archived-Client-IP: 127.0.0.1
Steps I followed:
I used a Regular Expression Extractor.
I enabled the 'Response Headers' radio button and used the whole Location header as the expression.
Please help me sort it out.
If you want to retrieve the Location field's value from the request's response, you might want to try the following pattern: Location:([^\r?\n]+). The first matching group will contain the value of the Location field.
The above expression is based on the following rules:
HTTP header fields are colon (":")-separated <key, value> pairs.
HTTP header fields are terminated by the EOL character combination (CR and LF).
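A quick way to sanity-check the pattern outside JMeter (a plain Python sketch of my own, with the headers abridged from the question):

import re

headers = ("HTTP/1.1 201 Processed\r\n"
           "Connection: Keep-Alive\r\n"
           "Location: /retail/iows/ie/en/storage/servicedocs/paxplanner/2015-07-24/eCommerce.pdf\r\n"
           "X-Client-IP: 127.0.0.1,10.62.26.150\r\n")

# The capture group stops at the end of the Location line.
m = re.search(r'Location:([^\r?\n]+)', headers)
if m:
    print(m.group(1).strip())
    # /retail/iows/ie/en/storage/servicedocs/paxplanner/2015-07-24/eCommerce.pdf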
Please try this:
Location:([\s\S]*)X-Client
If it doesn't work, try putting a \ before the - in X-Client (escaping the -).

Serving 206 Byte-Range through Nginx, Django

I have Nginx serving my static Django files, with the app running on Gunicorn. I am trying to serve MP3 files and have them return a 206 status so that they will be accepted by Apple for podcasting. At the moment the audio files are in my static directory and are served straight through Nginx. This is the response I get:
HTTP/1.1 200 OK
Server: nginx/1.2.1
Date: Wed, 30 Jan 2013 07:12:36 GMT
Content-Type: audio/mpeg
Content-Length: 22094968
Connection: keep-alive
Last-Modified: Wed, 30 Jan 2013 05:43:57 GMT
Can someone help with the correct way to serve MP3 files so that byte ranges will be accepted?
Update: This is the code in my view that serves the file through Django:
response = HttpResponse(file.read(), mimetype=mimetype)
response["Content-Disposition"]= "filename=%s" % os.path.split(s)[1]
response["Accept-Ranges"]="bytes"
response.status_code = 206
return response
If you want to do this only in Nginx, then in the location directive responsible for serving static .mp3 files, add these directives:
# Add a "Content-Disposition" response header whose value is
# "filename=" + the name of the file (taken from $request_uri),
# so for the URL example.com/static/audio/blahblah.mp3
# it will be filename=/static/audio/blahblah.mp3
set $sent_http_content_disposition filename=$request_uri;
# or
add_header Content-Disposition filename=$request_uri;
# add an "Accept-Ranges" header
set $sent_http_accept_ranges bytes;
# or
add_header Accept-Ranges bytes;
# tell Nginx that the final HTTP status code should be 206, not 200
return 206;
There is something in your config that prevents nginx from supporting range requests for these static files. When using standard nginx modules, this may be one of the following filters (these filters modify responses, and byte-range handling is disabled if the response body may be modified):
gzip
gunzip
addition filter
ssi
All these modules have directives to control the MIME types they work with (gzip_types, gunzip_types, addition_types, ssi_types). By default, they are set to restrictive sets of MIME types, and range requests work fine for most static files even if these modules are enabled. But placing something like
ssi on;
ssi_types *;
into a configuration will disable byte-range support for all static files affected.
Check your nginx configuration and remove the offending lines, and/or make sure to switch off the modules in question for the location you serve your mp3 files from.
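Once the offending lines are gone, a quick way to verify that byte ranges are honoured (a plain Python sketch of my own; the URL is a placeholder):

import urllib.request

# Ask for the first two bytes only; a server with working byte-range
# support should answer 206 Partial Content with a Content-Range header.
req = urllib.request.Request(
    "https://example.com/static/audio/episode.mp3",
    headers={"Range": "bytes=0-1"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)                        # expect 206
    print(resp.headers.get("Content-Range"))  # e.g. bytes 0-1/22094968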
You can define your own status code:
response = HttpResponse('this is my response data')
response.status_code = 206
return response
If you are using Django 1.5 you may want to have a look at the new StreamingHttpResponse:
https://docs.djangoproject.com/en/dev/ref/request-response/#streaminghttpresponse-objects
This can be very helpful for big files.
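For example, here is a rough sketch (my own illustration, not a complete implementation) of honouring a simple open-ended Range header with StreamingHttpResponse; the view name and path handling are assumptions:

import os
import re
from wsgiref.util import FileWrapper
from django.http import StreamingHttpResponse

def serve_mp3(request, path):
    size = os.path.getsize(path)
    range_header = request.META.get("HTTP_RANGE", "")
    match = re.match(r"bytes=(\d+)-$", range_header)   # only handles "bytes=N-"
    start = int(match.group(1)) if match else 0

    f = open(path, "rb")
    f.seek(start)
    response = StreamingHttpResponse(
        FileWrapper(f),
        content_type="audio/mpeg",
        status=206 if match else 200,
    )
    response["Accept-Ranges"] = "bytes"
    response["Content-Length"] = str(size - start)
    if match:
        response["Content-Range"] = "bytes %d-%d/%d" % (start, size - 1, size)
    return response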