Character encoding issue between logstash and logstash-forwarder - amazon-web-services

I have the following setup -
[logstash-forwarder nodes] -> [Amazon's elastic load balancer] -> [logstash nodes]
I start logstash-forwarder with the following config file -
{
  "network": {
    "servers": [ "<Load_balancer_DNS_name>:443" ],
    "ssl key": "/etc/pki/private/logstash-forwarder.key",
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "-" ],
      "fields": { "type": "stdin" }
    }
  ]
}
And I start logstash with the following settings -
input {
  tcp {
    port => "7286"
    codec => plain {
      charset => "UTF-8"
    }
  }
}
output {
  stdout { }
  elasticsearch {
    host => "<cluster_node_ip>"
    protocol => "http"
  }
}
Now I feed some input at the command line on the logstash-forwarder side, just to check that it reaches logstash intact. But when I type "Hello World" or any other plain text on the logstash-forwarder side, I receive the following on the logstash node instead of the original text -
Received an event that has a different character encoding than you configured. {:text=>"1W\\u0000\\u0000\\u0000\\u00011C\\u0000\\u0000\\u0000ox^2ta```\\u0004bV fI\\xCB\\xCCI\\u0005\\xF1uA\\x9C\\x8C\\xFC\\xE2\\u0012 -\\x90Y\\xA0kh\\xA0kha\\xAAkdh\\xACkb\\u0006\\u0014c\\xCBOK+N\\u0005\\xC92\\u001A\\x80\\x94\\xE6d\\xE6\\x81\\xF4\\t\\xBBy\\xFA9\\xFAć\\xB8\\u0006\\x87\\xC4{{:9\\xFA9\\xDAۃ\\xA4K*\\v#Ҭ\\xC5%)\\x99y\\u0000\\u0000\\u0000\\u0000\\xFF\\xFF\\u0001\\u0000\\u0000\\xFF\\xFF\\u001A\\x93\\u0015\\xA2", :expected_charset=>"UTF-8", :level=>:warn}

logstash-forwarder uses its own protocol, called 'lumberjack', to communicate with logstash.
You need to have the key & crt on the logstash server as well, and use a lumberjack input to handle it:
input {
  lumberjack {
    # The port to listen on
    port => 7286
    # The paths to your SSL cert and key
    ssl_certificate => "path/to/logstash-forwarder.crt"
    ssl_key => "path/to/logstash-forwarder.key"
    # Set this to whatever you want.
    type => "somelogs"
  }
}
What you're seeing is the raw lumberjack protocol data (framed and compressed), not plain UTF-8 text.
https://github.com/elasticsearch/logstash-forwarder#use-with-logstash

Related

AWS SSM port forwarding : Not able to restrict port

How can I restrict the ports that are open for port forwarding in AWS SSM? I've cloned the publicly available SSM document AWS-StartPortForwardingSession.
I'm trying to edit the allowedPattern parameter from a regular expression accepting all ports between 1024 and 65535 to one accepting only 4 port numbers (3142, 4200, 121, 1300).
I've tried using a JSON array to specify the needed port numbers, but it gives the error
InvalidDocumentContent: JSON not well-formed. at Line: 15, Column: 25
The original SSM document content
{
  "schemaVersion": "1.0",
  "description": "Document to start port forwarding session over Session Manager",
  "sessionType": "Port",
  "parameters": {
    "portNumber": {
      "type": "String",
      "description": "(Optional) Port number of the server on the instance",
      "allowedPattern": "^([1-9]|[1-9][0-9]{1,3}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])$",
      "default": "80"
    },
    "localPortNumber": {
      "type": "String",
      "description": "(Optional) Port number on local machine to forward traffic to. An open port is chosen at run-time if not provided",
      "allowedPattern": "^([1-9]|[1-9][0-9]{1,3}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])$",
      "default": "0"
    }
  },
  "properties": {
    "portNumber": "{{ portNumber }}",
    "type": "LocalPortForwarding",
    "localPortNumber": "{{ localPortNumber }}"
  }
}
The document that I've cloned and edited, and which is not working:
{
  "schemaVersion": "1.0",
  "description": "Document to start port forwarding session over Session Manager",
  "sessionType": "Port",
  "parameters": {
    "portNumber": {
      "type": "String",
      "description": "(Optional) Port number of the server on the instance",
      "allowedPattern": "^([1-9]|[1-9][0-9]{1,3}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])$",
      "default": "80"
    },
    "localPortNumber": {
      "type": "String",
      "description": "(Optional) Port number on local machine to forward traffic to. An open port is chosen at run-time if not provided",
      "allowedPattern": ["9200","9042","13000","389"],
      "default": "0"
    }
  },
  "properties": {
    "portNumber": "{{ portNumber }}",
    "type": "LocalPortForwarding",
    "localPortNumber": "{{ localPortNumber }}"
  }
}
The problem you are having is because you are specifying a list instead of a pattern. Try this regex:
"(3142|4200|121|1300)"
To be clear, the quotes are not part of the regex; the entire line above is a string value for your allowedPattern.
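As a quick sanity check in JavaScript (a sketch; the ^ and $ anchors are an addition, mirroring the style of the document's original allowedPattern values so that only exact port values match):

```javascript
// Anchored version of the suggested pattern: only the four exact port
// numbers match; substrings such as "31420" are rejected.
const allowed = /^(3142|4200|121|1300)$/;

for (const port of ['3142', '4200', '121', '1300', '22', '31420']) {
  console.log(`${port}: ${allowed.test(port)}`);
}
```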

OpenSearch on AWS does not recognise GeoIP's location as GEOJSON type

I've got logstash processing logs and uploading them to an OpenSearch instance running as a service on AWS.
I've added a geoip filter to my logstash config to turn IPs into geographic data. According to the docs, the geoip filter should generate a location field containing lon and lat, and that field should be recognised as the geo_point type, which can then be used to populate map visualisations.
I've been trying for a couple of hours now, but OpenSearch always splits the location field into the numbers location.lon and location.lat instead of recognising location as a geo_point, so I cannot use it for map visualisations.
Here is my logstash config:
input {
  file {
    ...
    codec => json {
      target => "[log_message]"
    }
  }
}
filter {
  ...
  geoip {
    source => "[log_message][forwarded_ip_address]"
  }
}
output {
  ...
  opensearch {
    ...
    ecs_compatibility => disabled
  }
}
The template on my opensearch instance is the standard one, so it does contain this:
"geoip": {
"dynamic": true,
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "half_float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "half_float"
}
}
},
I am not sure if this is relevant but AWS OpenSearch requires the ECS compatibility to be set as disabled, which I did.
Has somebody managed to do this successfully on AWS OpenSearch?
Have you tried setting the location field to the geo_point type in the index mapping before ingesting the data? I don't think OpenSearch detects the geo_point type automatically.
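For example, something along these lines from the OpenSearch Dev Tools console before ingesting (a sketch: the index name my-logs is a placeholder, and the geoip.location path assumes the geoip filter's default target; adjust it to wherever your filter actually writes):

```
PUT my-logs
{
  "mappings": {
    "properties": {
      "geoip": {
        "properties": {
          "location": { "type": "geo_point" }
        }
      }
    }
  }
}
```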

Deploy Multiple full-stack React + Express applications with path routing on ELB

I currently have four containerized React + Express applications (port 3001 exposed) sitting on four individual ECS instances with four different CNAMEs. Each sits behind its own nginx service acting as a reverse proxy.
Given that the number of applications may increase, I'm hoping to re-deploy this on ECS behind an ELB, but I'm running into problems with the path routing. My goal is a setup where <url_name>/service-1 routes traffic to the container referenced by service-1, and so on.
Currently, the services are all in a running state, but the routing produces a bunch of console errors saying that the static JS and CSS files produced by the React build command cannot be found at <url_name>/. Has anyone found a way to run multiple full-stack React + Express applications with path routing on an ELB, or a workaround such as adding an nginx service or setting the React homepage to a fixed value?
# container_definition
[
  {
    "name": "service-1",
    "image": "<image-name>:latest",
    "cpu": 256,
    "memory": 256,
    "portMappings": [
      {
        "containerPort": 3001,
        "hostPort": 3001
      }
    ],
    "essential": true
  }
]
# rule.json
{
  "ListenerArn": "placeholder",
  "Conditions": [
    {
      "Field": "path-pattern",
      "Values": [
        "/service-1*"
      ]
    }
  ],
  "Priority": 1,
  "Actions": [
    {
      "Type": "forward",
      "TargetGroupArn": "placeholder"
    }
  ]
}
# server.js
const express = require('express'),
  path = require('path');

const createServer = function () {
  const port = 3001;
  // Create a new Express server
  const app = express(),
    subpath = express();
  // Ensure every api route is prefixed by /api
  app.use('/api', subpath);
  // All routes related to access points
  const users = require('./routes/users');
  subpath.get('/users/:id', users.getUserById);
  // Serve up static assets, i.e. HTML, CSS, and JS from /build
  app.use(express.static('build'));
  if (process.env.NODE_ENV)
    app.get('*', (req, res) => res.sendFile(path.join(__dirname, 'build', 'index.html')));
  // Start the HTTP listener using the given port
  return app.listen(port, () => console.log(`Express server (HTTP) listening on port ${port}`));
};

module.exports = createServer;
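One sketch of a workaround (SERVICE_BASE_PATH is a hypothetical name): since the ALB rule forwards /service-1* with the prefix intact, the app has to either strip that prefix or be built to expect it (e.g. "homepage": "/service-1" in package.json so the build emits prefixed asset URLs). The prefix-stripping part boils down to:

```javascript
// Requests arriving via the ALB rule "/service-1*" keep the prefix, so the
// Express app must remove it before matching /api and static routes. (In
// Express itself, app.use('/service-1', subApp) does this automatically.)
const BASE_PATH = process.env.SERVICE_BASE_PATH || '/service-1';

function stripBasePath(url) {
  if (url === BASE_PATH) return '/';
  if (url.startsWith(BASE_PATH + '/')) return url.slice(BASE_PATH.length);
  return url; // not under this service's path; leave untouched
}

console.log(stripBasePath('/service-1/static/js/main.js')); // "/static/js/main.js"
console.log(stripBasePath('/service-1'));                   // "/"
```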

Configure logstash to read logs from Amazon S3 bucket

I have been trying to configure logstash to read logs that are generated in my Amazon S3 bucket, but have not been successful. Below are the details:
I have installed logstash on an EC2 instance.
My logs are all .gz files in the S3 bucket.
The conf file looks like this:
input {
  s3 {
    access_key_id => "MY_ACCESS_KEY_ID"
    bucket => "MY_BUCKET"
    region => "MY_REGION"
    secret_access_key => "MY_SECRET_ACCESS_KEY"
    prefix => "/"
    type => "s3"
    add_field => { source => gzfiles }
  }
}
filter {
  if [type] == "s3" {
    csv {
      columns => [ "date", "time", "x-edge-location", "sc-bytes", "c-ip", "cs-method", "Host", "cs-uri-stem", "sc-status", "Referer", "User-Agent", "cs-uri-query", "Cookie", "x-edge-result-type", "x-edge-request-id" ]
    }
  }
  if [message] =~ /^#/ {
    drop {}
  }
}
output {
  elasticsearch {
    host => "ELASTICSEARCH_URL"
    protocol => "http"
  }
}

server selected unsupported version 301

I have installed Logstash, Elasticsearch and Kibana, and all 3 are working fine.
Below is my logstash.conf:
input {
  lumberjack {
    port => "5000"
    type => "common-logging-access"
    ssl_certificate => "C:/Sunil/HSL/SSL/logstash-forwarder.crt"
    ssl_key => "/Myfolder/SSL/logstash-forwarder.key"
  }
}
filter {
  mutate {
    add_field => [ "hostip", "%{host}" ]
    add_field => [ "systemName", "common-logging-app" ]
  }
  dns {
    reverse => [ "host" ]
    action => replace
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
  }
}
And below is logstash-forwarder.conf:
{
  "network": {
    "servers": [ "127.0.0.1:5000" ],
    "ssl certificate": "/Myfolder/SSL/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/logs/common-logging/*.log"
      ],
      "fields": { "type": "commonUiLogs" }
    },
    {
      "paths": [ "/var/logs/Logstash/elasticsearch-1.3.4/logs/*.log" ],
      "fields": { "type": "apache" }
    }
  ]
}
The certificate is created using
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt -days 365
When I run the forwarder using the command logstash-forwarder -config logstash-forwarder.conf,
it shows this error:
2015/01/12 16:38:03.509240 Connecting to [127.0.0.1]:5000 (127.0.0.1)
2015/01/12 16:38:03.511240 Failed to tls handshake with 127.0.0.1 tls: server selected unsupported protocol version 301
I am using the following versions:
logstash-1.4.2
elasticsearch-1.3.4
kibana-3.1.1
I am using a Windows 7 64-bit machine.
Please help me with this.
Regards,
Sunil.
The Logstash server is offering a TLS protocol version that is now considered insecure (301 is the hex code 0x0301, i.e. TLS 1.0, which your logstash-forwarder build no longer accepts). Please update the Java runtime that runs the Logstash instance to the latest version so it can negotiate a newer TLS version.