Logstash conf file for parsing Django exceptions

I have been trying to use Logstash, Elasticsearch, and Kibana to monitor my Django server.
I have set up the conf file as given below:
input {
  tcp { port => 5000 codec => json }
  udp { port => 5000 type => syslog }
}
output {
  elasticsearch_http {
    host => "127.0.0.1"
    port => 9200
  }
  stdout { codec => rubydebug }
}
But the messages that get logged are lengthy, and I could not find a way to parse them.
Any help is appreciated.

As far as I can tell, there is no built-in pattern that will directly parse Django exceptions.
You need to tell the forwarding agent to target the Django log files that you're generating, marking them as "type": "django".
Then, on the Logstash server, you can use the following:
pattern:
DJANGO_LOGLEVEL (DEBUG|INFO|ERROR|WARNING|CRITICAL)
DJANGO_LOG %{DJANGO_LOGLEVEL:log_level}\s+%{TIMESTAMP_ISO8601:log_timestamp}\s+%{TZ:log_tz}\s+%{NOTSPACE:logger}\s+%{WORD:module}\s+%{POSINT:proc_id}\s+%{GREEDYDATA:content}
filter:
filter {
  if [type] == "django" {
    grok {
      match => [ "message", "%{DJANGO_LOG}" ]
    }
    date {
      # the grok pattern above captures the timestamp into log_timestamp
      match => [ "log_timestamp", "ISO8601", "yyyy-MM-dd HH:mm:ss,SSS" ]
      target => "@timestamp"
    }
  }
}
If you don't want to add a separate pattern file, you can expand DJANGO_LOGLEVEL inline in place of %{DJANGO_LOGLEVEL:log_level} and paste the rest of the DJANGO_LOG pattern directly into the grok match string.
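A sketch of that inline variant, using the same capture names as the pattern above:
filter {
  if [type] == "django" {
    grok {
      # DJANGO_LOGLEVEL expanded inline; the rest of DJANGO_LOG pasted into the match string
      match => [ "message", "(?<log_level>DEBUG|INFO|ERROR|WARNING|CRITICAL)\s+%{TIMESTAMP_ISO8601:log_timestamp}\s+%{TZ:log_tz}\s+%{NOTSPACE:logger}\s+%{WORD:module}\s+%{POSINT:proc_id}\s+%{GREEDYDATA:content}" ]
    }
  }
}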


Logstash grok filter custom pattern is not working

I have a log file (http://codepad.org/vAMFhhR2), and I want to extract a specific number out of it (line 18).
I wrote a custom-pattern grok filter and tested it on http://grokdebug.herokuapp.com/; it works fine and extracts my desired value.
Here's what logstash.conf looks like:
input {
  tcp {
    port => 5000
  }
}
filter {
  grok {
    match => [ "message", "(?<scraped>(?<='item_scraped_count': ).*(?=,))" ]
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
But it doesn't match any records from the same log in Kibana.
Thoughts?
Your regexp may be valid but the lookahead and lookbehind ("?=" and "?<=") are not a good choice in this context. Instead you could use a much simpler filter:
match => [ "message", "'item_scraped_count': %{NUMBER:scraped}" ]
This will extract the number after 'item_scraped_count': as a field called scraped, using the 'NUMBER' Grok built-in pattern.
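Dropped into the filter block from your config, that would look like this (a sketch combining your structure with the suggested pattern):
filter {
  grok {
    match => [ "message", "'item_scraped_count': %{NUMBER:scraped}" ]
  }
}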
Result in Kibana:
{
  "_index": "logstash-2017.04.11",
  "_type": "logs",
  "_source": {
    "@timestamp": "2017-04-11T20:02:13.194Z",
    "scraped": "22",
    (...)
  }
}
If I may suggest another improvement: since your message is spread across multiple lines you could easily merge it using the multiline input codec:
input {
  tcp {
    port => 5000
    codec => multiline {
      pattern => "^(\s|{')"
      what => "previous"
    }
  }
}
This will merge all the lines starting with either a whitespace or {' with the previous one.

How to format date in Logstash Configuration

I am using logstash to parse log entries from an input log file.
LogLine:
TID: [0] [] [2016-05-30 23:02:02,602] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Configured Registry in 572ms {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService}
Grok Pattern:
TID:%{SPACE}\[%{INT:SourceSystemId}\]%{SPACE}\[%{DATA:ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:MessageType}%{SPACE}{%{JAVACLASS:MessageTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:Message}
My grok pattern is working fine. I am sending these parsed entries to a REST-based API that I built myself.
Configuration:
output {
  stdout { }
  http {
    url => "http://localhost:8086/messages"
    http_method => "post"
    format => "json"
    mapping => ["TimeStamp","%{TimeStamp}","CorrelationId","986565","Severity","NORMAL","MessageType","%{MessageType}","MessageTitle","%{MessageTitle}","Message","%{Message}"]
  }
}
In the current output, I am getting the date as it is parsed from the logs:
Current Output:
{
  "TimeStamp": "2016-05-30 23:02:02,602"
}
Problem Statement:
But the problem is that my API does not expect the date in that format; it expects the generic XSD dateTime format, as shown below:
Expected Output:
{
  "TimeStamp": "2016-05-30T23:02:02.602"
}
Can somebody please guide me on what changes I need to make in my filter or output mapping to achieve this?
In order to transform
2016-05-30 23:02:02,602
to the XSD datetime format
2016-05-30T23:02:02.602
you can simply add a mutate/gsub filter that replaces the space character with a T and the comma (,) with a period (.):
filter {
  mutate {
    gsub => [
      "TimeStamp", "\s", "T",
      "TimeStamp", ",", "."
    ]
  }
}
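For completeness, a sketch of where that mutate sits relative to the grok and http output already shown (grok pattern copied from the question):
filter {
  grok {
    match => [ "message", "TID:%{SPACE}\[%{INT:SourceSystemId}\]%{SPACE}\[%{DATA:ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:MessageType}%{SPACE}{%{JAVACLASS:MessageTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:Message}" ]
  }
  mutate {
    # runs after grok, so TimeStamp already exists when it is rewritten
    gsub => [
      "TimeStamp", "\s", "T",
      "TimeStamp", ",", "."
    ]
  }
}
With the filters in that order, %{TimeStamp} in the http mapping renders as 2016-05-30T23:02:02.602.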

Logstash skipping too many log lines

I set up Logstash on my Windows box and run 7 instances of it, each with its own folder of log files as input. I ran them all at the same time, pointed at an AWS ES cluster of 7 r3.xlarge data nodes and 3 r3.xlarge master nodes. All input files combined are around 9 GB. After all the Logstash instances stopped running, I only had 6 million events in Elasticsearch; there should be around 30 million. I went back to one of my Logstash cmd windows and looked at the last event. It did not correspond to the last log line in the file it was read from; it was roughly the 50th line from the bottom. The second-to-last event in the same window did not correspond to the log line right before the first one either; I found it about 30 lines above it in the log file. So it is apparent that my Logstash is skipping log lines.
Now I checked my Elasticsearch stats and they show all zeros, so nothing got dropped? (I looked at bulk.rejected in particular.)
_cat/thread_pool?v
Is this data cumulative or does it get refreshed?
Which brings me to my second question: if Logstash itself dropped the log lines for some reason, where and how can I troubleshoot it? I know that none of my Logstash instances crashed. All I know is that it happily dropped 70% of all my logs, and I have no error log or clue to go by as to what happened.
Edit:
My logstash configuration:
(It is as if it is ignoring all my logs for Friday, Saturday, and Sunday, and only processing Monday's (3/21).)
input {
  file {
    type => "apache_logs"
    path => "D:/logs/apache_logs/all/ssl_access.*"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}
filter {
  grok {
    match => ["message","%{IPORHOST:client_ip} (?<username>[-]) (?<password>[-]) \[(?<timestamp>\d{2}[/][a-zA-Z]{3}[/]\d{4}:\d{2}:\d{2}:\d{2}\s-\d{0,4})\] \"%{GREEDYDATA:request}\" %{NOTSPACE:obssocookie} %{NOTSPACE:ps_sso_uid_in} %{NOTSPACE:ps_sso_uid_out} (?<status>[0-9]{3}) (?<bytes>[0-9]{1,}|-) %{NOTSPACE:protocol} %{NOTSPACE:ciphers} \"%{GREEDYDATA:referrer}\" \"%{GREEDYDATA:user_agent}\""]
    match => [ "path", "(?<app_node>webpr[0-9]{2}[a-z]{0,1})" ]
    add_field => { "server_node" => "%{app_node}" }
    break_on_match => false
  }
  mutate {
    gsub => ["obssocookie","^.*=",""]
  }
  mutate {
    gsub => ["ps_sso_uid_in","^.*=",""]
  }
  mutate {
    gsub => ["ps_sso_uid_out","^.*=",""]
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    remove_field => "timestamp"
  }
  geoip {
    source => "client_ip"
  }
  if [geoip] {
    mutate {
      add_field => {
        "ip_type" => "public"
      }
    }
  } else {
    mutate {
      add_field => {
        "ip_type" => "private"
      }
    }
  }
}
output {
  stdout { codec => rubydebug }
  amazon_es {
    hosts => ["apache-logs-xxxxxxxxxxxxxxxxxxxxxxxxxx.us-west-2.es.amazonaws.com"]
    region => "us-west-2"
    aws_access_key_id => 'xxxxxxxxxxxxxxxxxx'
    aws_secret_access_key => 'xxxxxxxxxxxxxxxxxxxxx'
    index => "logstash-apache-friday"
  }
}
How can I find out how many events Logstash itself dropped, as opposed to how many Elasticsearch rejected? I already checked through the API and bulk.rejected = 0.
Found my culprit. I have to include this in my file input; it looks like it skips any files older than 24 hours by default:
ignore_older => 0
Kind of surprising. I would expect to add settings when I want to narrow my input; otherwise Logstash should process any file, older than 24 hours or not. Really not something that was obvious.
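For reference, a sketch of the file input with that option added (paths as in the config above; the default and exact semantics of ignore_older have changed across versions of the file input plugin, so check the docs for the version you run):
input {
  file {
    type => "apache_logs"
    path => "D:/logs/apache_logs/all/ssl_access.*"
    start_position => "beginning"
    sincedb_path => "NUL"
    ignore_older => 0   # lift the last-modified cutoff that was silently skipping older files
  }
}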

GROK Pattern Works with GROK Debugger but not in Logstash GROK

I have a GROK pattern I am trying to use in Logstash that works within the GROK Debugger website but not within Logstash. I've tried different configurations with no success. I'm hoping someone can help me identify why this is not working.
Input: 2015-04-15 12:43:23.788 1883 AUDIT nova.compute.resource_tracker [-] Free disk (GB): -7
Search Pattern: Free disk \(GB\): \-%{INT:auth_method}
I want to extract the value 7
Thanks for your help!!!!
Hate to say it, OP, but it works for me:
input {
  stdin {}
}
filter {
  grok {
    match => [ "message", "Free disk \(GB\): \-%{INT:auth_method}" ]
  }
}
output {
  stdout { codec => rubydebug }
}
Gives you this:
2015-04-15 12:43:23.788 1883 AUDIT nova.compute.resource_tracker [-] Free disk (GB): -7
{
  "message" => "2015-04-15 12:43:23.788 1883 AUDIT nova.compute.resource_tracker [-] Free disk (GB): -7",
  "@version" => "1",
  "@timestamp" => "2015-04-16T15:57:17.229Z",
  "host" => "0.0.0.0",
  "auth_method" => "7"
}
Check for extra spaces at the end of your pattern, perhaps?

Good resources for grok patterns for Python log files

I want to use Logstash to parse Python log files. Where can I find resources to help me do that? For example:
20131113T052627.769: myapp.py: 240: INFO: User Niranjan Logged-in
From this I need to capture the time information and also some of the data.
I had exactly the same problem/need. I couldn't really find a solution to this. No available grok patterns really matched the python logging output, so I simply went ahead and wrote a custom grok pattern which I've added naively into patterns/grok-patterns.
DATESTAMP_PYTHON %{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND},%{INT}
The Logstash configuration I wrote gave me nice fields:
@timestamp
level
message
I added an extra field which I called pymodule, which should show you the Python module that produced the log entry.
My Logstash configuration file looks like this (ignore the sincedb_path; it is simply a way of forcing Logstash to read the entire log file every time you run it):
input {
  file {
    path => "/tmp/logging_file"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => [ "message", "%{DATESTAMP_PYTHON:timestamp} - %{DATA:pymodule} - %{LOGLEVEL:level} - %{GREEDYDATA:logmessage}" ]
  }
  mutate {
    rename => [ "logmessage", "message" ]
  }
  date {
    timezone => "Europe/Luxembourg"
    locale => "en"
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss,SSS" ]
  }
}
output {
  stdout {
    codec => json
  }
}
Please note that I give absolutely no guarantee that this is the best or even a slightly acceptable solution.
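If you would rather not edit the bundled patterns/grok-patterns file, grok can also load the custom pattern from a directory of your own via its patterns_dir option; a minimal sketch, assuming the DATESTAMP_PYTHON line is saved in a file under /etc/logstash/patterns (the path is an arbitrary choice):
grok {
  patterns_dir => ["/etc/logstash/patterns"]   # a file in this directory defines DATESTAMP_PYTHON
  match => [ "message", "%{DATESTAMP_PYTHON:timestamp} - %{DATA:pymodule} - %{LOGLEVEL:level} - %{GREEDYDATA:logmessage}" ]
}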
Our Python log file has a slightly different format:
[2014-10-08 19:05:02,846] (6715) DEBUG:Our debug message here
So I was able to create a configuration file without any need for special patterns:
input {
  file {
    path => "/path/to/python.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => [ "message", "\[%{TIMESTAMP_ISO8601:timestamp}\] \(%{DATA:pyid}\) %{LOGLEVEL:level}\:%{GREEDYDATA:logmessage}" ]
  }
  mutate {
    rename => [ "logmessage", "message" ]
  }
  date {
    timezone => "Europe/London"
    locale => "en"
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss,SSS" ]
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
  stdout {
    codec => rubydebug
  }
}
And this seems to work fine.