server selected unsupported version 301 - logstash-forwarder

I have installed Logstash, Elasticsearch, and Kibana (the ELK stack) and all three are working fine.
Below is my logstash.conf:
input {
  lumberjack {
    port => "5000"
    type => "common-logging-access"
    ssl_certificate => "C:/Sunil/HSL/SSL/logstash-forwarder.crt"
    ssl_key => "/Myfolder/SSL/logstash-forwarder.key"
  }
}
filter {
  mutate {
    add_field => [ "hostip", "%{host}" ]
    add_field => [ "systemName", "common-logging-app" ]
  }
  dns {
    reverse => [ "host" ]
    action => replace
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
  }
}
And below is my logstash-forwarder.conf:
{
  "network": {
    "servers": [ "127.0.0.1:5000" ],
    "ssl certificate": "/Myfolder/SSL/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/logs/common-logging/*.log"
      ],
      "fields": { "type": "commonUiLogs" }
    },
    {
      "paths": [ "/var/logs/Logstash/elasticsearch-1.3.4/logs/*.log" ],
      "fields": { "type": "apache" }
    }
  ]
}
The certificate was created using:
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt -days 365
When I run the forwarder with the command logstash-forwarder -config logstash-forwarder.conf, it shows this error:
2015/01/12 16:38:03.509240 Connecting to [127.0.0.1]:5000 (127.0.0.1)
2015/01/12 16:38:03.511240 Failed to tls handshake with 127.0.0.1 tls: server selected unsupported protocol version 301
I am using the following versions:
logstash-1.4.2
elasticsearch-1.3.4
kibana-3.1.1
I am using a Windows 7 64-bit machine.
Please help me on this.
regards,
Sunil.

The Logstash server is offering a TLS protocol version that is now considered insecure: the 301 in the error is hex 0x0301, i.e. TLS 1.0, which the forwarder refuses. Please update the Java runtime that the Logstash instance runs on to the latest version.
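As a quick check (a sketch, assuming an openssl binary is available on the machine; the port is the lumberjack port from the question), you can probe which TLS versions the Logstash server will negotiate. If only the -tls1 probe succeeds, the JVM running Logstash is limited to TLS 1.0 and should be updated:

# Probe the lumberjack port with specific TLS versions (OpenSSL 1.0.1 or newer).
# If -tls1 connects but -tls1_1 / -tls1_2 fail, the server is stuck on TLS 1.0.
openssl s_client -connect 127.0.0.1:5000 -tls1
openssl s_client -connect 127.0.0.1:5000 -tls1_1
openssl s_client -connect 127.0.0.1:5000 -tls1_2

# Check which Java the Logstash instance is running on.
java -version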

Related

How to set up Hyperledger Fabric Explorer | Amazon Managed Blockchain

I set up a Hyperledger Fabric network using Amazon Managed Blockchain by following this guide. Everything works properly in the Hyperledger network. Now I want to set up Hyperledger Explorer. I cannot find any official Amazon document on setting up Hyperledger Fabric Explorer, so I am following this article. As the author suggests, I cloned this repo. I have done everything the author describes in the article. Now I need to edit the first-network.json file. I edited first-network.json as follows:
{
  "name": "first-network",
  "version": "1.0.0",
  "license": "Apache-2.0",
  "client": {
    "tlsEnable": true,
    "adminUser": "admin",
    "adminPassword": "adminpw",
    "enableAuthentication": false,
    "organization": "m-QMD*********6HK",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "300"
        },
        "orderer": "300"
      }
    }
  },
  "channels": {
    "mychannel": {
      "peers": {
        "nd-JEFEX**************N4": {}
      },
      "connection": {
        "timeout": {
          "peer": {
            "endorser": "6000",
            "eventHub": "6000",
            "eventReg": "6000"
          }
        }
      }
    }
  },
  "organizations": {
    "Org1MSP": {
      "mspid": "m-QMD*********6HK",
      "fullpath": true,
      "adminPrivateKey": {
        "path": "/fabric-path/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp/keystore/1bebc656f198efb4b5bed08ef42cf3b2d89ac86f0a6b928e7a172fd823df0a48_sk"
      },
      "signedCert": {
        "path": "/fabric-path/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp/signcerts/Admin#org1.example.com-cert.pem"
      }
    }
  },
  "peers": {
    "nd-JEFEX**************N4": {
      "tlsCACerts": {
        "path": "/fabric-path/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
      },
      "url": "grpcs://nd-JEFEX**************N4.m-QMD*********6HK.n-rf*********q.managedblockchain.us-east-1.amazonaws.com:30003",
      "eventUrl": "grpcs://nd-JEFEX**************N4.m-QMD*********6HK.n-rf*********q.managedblockchain.us-east-1.amazonaws.com:30003",
      "grpcOptions": {
        "ssl-target-name-override": "nd-JEFEX**************N4"
      }
    }
  }
}
My question is: what should I put in place of the adminPrivateKey path, the signedCert path, and the tlsCACerts path?
Here is the list of files generated while setting up Hyperledger Fabric in Amazon Managed Blockchain:
/home/ec2-user/admin-msp$ ls * -r
user:
signcerts:
cert.pem
keystore:
fd84a**********************1f03ff_sk
cacerts:
ca-m-*****-n-*****-managedblockchain-us-east-1-amazonaws-com-30002.pem
admincerts:
cert.pem
Please help me set up Hyperledger Fabric Explorer for my Hyperledger Fabric network.
You should configure your connection profile as below:
"organizations": {
"Org1MSP": {
"mspid": "m-QMD*********6HK",
"fullpath": true,
"adminPrivateKey": {
"path": "/home/ec2-user/admin-msp/keystore/fd84a**********************1f03ff_sk"
},
"signedCert": {
"path": "/home/ec2-user/admin-msp/signcerts/cert.pem"
}
}
},
"peers": {
"nd-JEFEX**************N4": {
"tlsCACerts": {
"path": "/home/ec2-user/admin-msp/cacerts/ca-m-*****-n-*****-managedblockchain-us-east-1-amazonaws-com-30002.pem"
},
"url": "grpcs://nd-JEFEX**************N4.m-QMD*********6HK.n-rf*********q.managedblockchain.us-east-1.amazonaws.com:30003",
"grpcOptions": {
"ssl-target-name-override": "nd-JEFEX**************N4"
}
}
}
I also recommend using the latest Explorer, because a commit for the AWS Managed Blockchain service and many other bug fixes were merged recently (Making Hyperledger Explorer compatible to Amazon Managed Blockchain N… · hyperledger/blockchain-explorer#7b30821).
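For example (a sketch; the exact branch and build steps depend on the Explorer version you pick, so check the project's README):

# Clone the Explorer repository and make sure you are on a revision that
# includes the Amazon Managed Blockchain compatibility commit (7b30821).
git clone https://github.com/hyperledger/blockchain-explorer.git
cd blockchain-explorer
git log --oneline | grep 7b30821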

How to fix the aws-cli cloudfront update-distribution command?

I have been trying to execute the command below, but it resulted in an error:
aws cloudfront update-distribution --id E29BDBENPXM1VE \
--Origins '{ "Items": [{
"OriginPath": "",
"CustomOriginConfig": {
"OriginSslProtocols": {
"Items": [
"TLSv1",
"TLSv1.1",
"TLSv1.2"
],
"Quantity": 3
}
}
}
]
}'
ERROR::: Unknown options: { "Items": [{
"OriginPath": "",
"CustomOriginConfig": {
"OriginSslProtocols": {
"Items": [
"TLSv1",
"TLSv1.1",
"TLSv1.2"
],
"Quantity": 3
}
}
}
]
}, --Origins
I need to remove SSLv3 from the distribution's OriginSslProtocols, i.e. end up with only:
aws cloudfront update-distribution --id E29BDBENPXM1VE \
--Origins '{ "Items": [{
"OriginPath": "",
"CustomOriginConfig": {
"OriginSslProtocols": {
"Items": [
"TLSv1",
"TLSv1.1",
"TLSv1.2"
],
"Quantity": 3
}
}
}
]
}'
1) How can I fix the command above? If that is not possible, is there a command other than the one below to disable/remove SSLv3 from OriginSslProtocols?
aws cloudfront update-distribution --id E29BDBENPXM1VE --distribution-config file://secure-ssl.json --if-match E35YV3CGILXQDJ
You are using the right command and it should be possible to do what you want.
However, it is slightly more complicated.
The reference page for the CLI command aws cloudfront update-distribution says:
When you update a distribution, there are more required fields than when you create a distribution.
That is why you must follow the steps given in the CLI reference [1] (a CLI sketch of these steps is shown below):
1. Submit a GetDistributionConfig request to get the current configuration and an ETag header for the distribution.
2. Update the XML document that was returned in the response to your GetDistributionConfig request to include your changes.
3. Submit an UpdateDistribution request to update the configuration for your distribution:
   - In the request body, include the XML document that you updated in Step 2. The request body must include an XML document with a DistributionConfig element.
   - Set the value of the HTTP If-Match header to the value of the ETag header that CloudFront returned when you submitted the GetDistributionConfig request in Step 1.
4. Review the response to the UpdateDistribution request to confirm that the configuration was successfully updated.
5. Optional: Submit a GetDistribution request to confirm that your changes have propagated. When propagation is complete, the value of Status is Deployed.
More info about the correct XML format is given in the CloudFront API Reference [2].
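A minimal sketch of that flow with the CLI (the distribution ID is taken from the question; note that the CLI works with JSON rather than XML, and the jq filtering is just one convenient way to split the ETag from the config):

# 1. Fetch the current configuration together with its ETag
aws cloudfront get-distribution-config --id E29BDBENPXM1VE > dist.json
ETAG=$(jq -r '.ETag' dist.json)

# 2. Extract DistributionConfig and edit OriginSslProtocols so it lists only
#    TLSv1, TLSv1.1 and TLSv1.2 (keep Quantity equal to the number of items)
jq '.DistributionConfig' dist.json > dist-config.json
# ... edit dist-config.json ...

# 3. Submit the complete, updated config with the ETag as the If-Match value
aws cloudfront update-distribution --id E29BDBENPXM1VE \
  --distribution-config file://dist-config.json --if-match "$ETAG"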
References
[1] https://docs.aws.amazon.com/cli/latest/reference/cloudfront/update-distribution.html
[2] https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateDistribution.html

Nginx internal DNS resolution issue

I have an nginx container in AWS that acts as a reverse proxy for my website, e.g. https://example.com. I have backend services that automatically register themselves in the local DNS zone aws.local (this is done by AWS ECS service discovery).
The problem is that nginx only resolves names to IP addresses at startup, so when a service container is restarted and gets a new IP, nginx keeps trying the old IP and I get a "502 Bad Gateway" error.
Here is the configuration I am running:
worker_processes 1;
events { worker_connections 1024; }
http {
  sendfile on;
  include /etc/nginx/mime.types;
  log_format graylog2_json '{ "timestamp": "$time_iso8601", '
    '"remote_addr": "$remote_addr", '
    '"body_bytes_sent": $body_bytes_sent, '
    '"request_time": $request_time, '
    '"response_status": $status, '
    '"request": "$request", '
    '"request_method": "$request_method", '
    '"host": "$host",'
    '"upstream_cache_status": "$upstream_cache_status",'
    '"upstream_addr": "$upstream_addr",'
    '"http_x_forwarded_for": "$http_x_forwarded_for",'
    '"http_referrer": "$http_referer", '
    '"http_user_agent": "$http_user_agent" }';
  upstream service1 {
    server service1.aws.local:8070;
  }
  upstream service2 {
    server service2.aws.local:8080;
  }
  resolver 10.0.0.2 valid=10s;
  server {
    listen 443 http2 ssl;
    server_name example.com;
    location /main {
      proxy_pass http://service1;
    }
    location /auth {
      proxy_pass http://service2;
    }
  }
}
I found advice to change the nginx config so that names are resolved per request, but then my browser tries to open "service2.aws.local:8070" and fails, since that is an AWS-internal DNS name. I should still see https://example.com/auth in my browser.
server {
  set $main service1.aws.local:2000;
  set $auth service2.aws.local:8070;
  location /main {
    proxy_http_version 1.1;
    proxy_pass http://$main;
  }
  location /auth {
    proxy_http_version 1.1;
    proxy_pass http://$auth;
  }
}
Can you help me fix it?
Thanks!
TL;DR
resolver 169.254.169.253;
set $upstream "service1.aws.local";
proxy_pass http://$upstream:8070;
Just like with ECS, I experienced the same issue when using Docker Compose.
According to six8's comment on GitHub
nginx only resolves hostnames on startup. You can use variables with
proxy_pass to get it to use the resolver for runtime lookups.
See:
https://forum.nginx.org/read.php?2,215830,215832#msg-215832
https://www.ruby-forum.com/topic/4407628
It's quite annoying.
One of the links above provides an example:
resolver 127.0.0.1;
set $backend "foo.example.com";
proxy_pass http://$backend;
The resolver directive is necessary, and we can't refer to the defined upstreams here.
According to Ivan Frolov's answer on StackExchange, the resolver's address should be set to 169.254.169.253 (the Amazon-provided DNS server).
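Putting it together for the setup in the question, a minimal sketch (service names, ports and the valid=10s override are taken from the question; TLS certificate directives are omitted for brevity):

server {
    listen 443 http2 ssl;
    server_name example.com;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    # Use the VPC resolver and re-resolve at runtime instead of only at startup
    resolver 169.254.169.253 valid=10s;

    location /main {
        # Using a variable forces nginx to resolve the name per request
        set $main_upstream service1.aws.local;
        proxy_pass http://$main_upstream:8070;
    }

    location /auth {
        set $auth_upstream service2.aws.local;
        proxy_pass http://$auth_upstream:8080;
    }
}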
What is the TTL for your Cloud Map service discovery records? If you do an nslookup from the NGINX container (assuming EC2 mode and that you can exec into the container), does it return the new record? Without more information it's hard to say, but I'd venture that this is a TTL issue and not an NGINX/service discovery problem.
Lower the TTL to 1 second and see if that works.
AWS CloudMap API Reference DNS Record
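If you go that route, a sketch with the CLI (the service ID is a placeholder, and the exact shape of the --service JSON should be double-checked against the Cloud Map UpdateService API reference):

# Find the Cloud Map service ID for the discovery service
aws servicediscovery list-services

# Lower the TTL of its DNS A record to 1 second
aws servicediscovery update-service \
  --id srv-xxxxxxxxxxxxxxxx \
  --service '{"DnsConfig":{"DnsRecords":[{"Type":"A","TTL":1}]}}'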
I found a solution that works well for this issue.
Nginx "proxy_pass" can't use /etc/hosts information.
I suggest using HAProxy as the reverse proxy in ECS.
I tried an nginx reverse proxy and failed, but succeeded with HAProxy, and its configuration is simpler than nginx's.
First, use the Docker "links" option and set environment variables (e.g. LINK_APP, LINK_PORT).
Second, reference these environment variables in haproxy.cfg.
Also, I recommend using dynamic port mapping with the ALB; it makes the setup more flexible.
taskdef.json:
# taskdef.json
{
  "executionRoleArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<APP_NAME>_ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "<APP_NAME>-rp",
      "image": "gnokoheat/ecs-reverse-proxy:latest",
      "essential": true,
      "memoryReservation": <MEMORY_RESV>,
      "portMappings": [
        {
          "hostPort": 0,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "links": [
        "<APP_NAME>"
      ],
      "environment": [
        {
          "name": "LINK_PORT",
          "value": "<SERVICE_PORT>"
        },
        {
          "name": "LINK_APP",
          "value": "<APP_NAME>"
        }
      ]
    },
    {
      "name": "<APP_NAME>",
      "image": "<IMAGE_NAME>",
      "essential": true,
      "memoryReservation": <MEMORY_RESV>,
      "portMappings": [
        {
          "protocol": "tcp",
          "containerPort": <SERVICE_PORT>
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "<SERVICE_PORT>"
        },
        {
          "name": "APP_NAME",
          "value": "<APP_NAME>"
        }
      ]
    }
  ],
  "requiresCompatibilities": [
    "EC2"
  ],
  "networkMode": "bridge",
  "family": "<APP_NAME>"
}
haproxy.cfg:
# haproxy.cfg
global
  daemon
  pidfile /var/run/haproxy.pid
defaults
  log global
  mode http
  retries 3
  timeout connect 5000
  timeout client 50000
  timeout server 50000
frontend http
  bind *:80
  http-request set-header X-Forwarded-Host %[req.hdr(Host)]
  compression algo gzip
  compression type text/css text/javascript text/plain application/json application/xml
  default_backend app
backend app
  server static "${LINK_APP}":"${LINK_PORT}"
Dockerfile (haproxy):
FROM haproxy:1.7
USER root
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
See:
Github : https://github.com/gnokoheat/ecs-reverse-proxy
Docker image : gnokoheat/ecs-reverse-proxy:latest

Deploy Multiple full-stack React + Express applications with path routing on ELB

I currently have four containerized React + Express applications (port 3001 exposed) sitting on four individual ECS instances with four different CNAMEs. They each sit behind their own nginx service as a reverse proxy.
Given that the number of applications may increase, I'm hoping to re-deploy this on ECS with an ELB, but I'm running into problems with the path routing. My goal is to have a system where <url_name>/service-1 will route traffic to the container referenced in service-1 and so on.
Currently, the services are all in a running state, but the routing produces a bunch of console errors saying the static JS and CSS files produced by the React build command cannot be found at <url_name>/. Has anyone found a way to run multiple full-stack React + Express applications with path routing on an ELB, or a workaround such as adding an nginx service or setting the React homepage to a fixed value?
# container_definition
[
  {
    "name": "service-1",
    "image": "<image-name>:latest",
    "cpu": 256,
    "memory": 256,
    "portMappings": [
      {
        "containerPort": 3001,
        "hostPort": 3001
      }
    ],
    "essential": true
  }
]
# rule.json
{
  "ListenerArn": "placeholder",
  "Conditions": [
    {
      "Field": "path-pattern",
      "Values": [
        "/service-1*"
      ]
    }
  ],
  "Priority": 1,
  "Actions": [
    {
      "Type": "forward",
      "TargetGroupArn": "placeholder"
    }
  ]
}
# server.js
const express = require('express'),
  path = require('path');

const createServer = function () {
  const port = 3001;
  // Create a new Express server
  const app = express(),
    subpath = express();
  // Ensure every api route is prefixed by /api
  app.use('/api', subpath);
  // All routes related to access points
  const users = require('./routes/users');
  subpath.get('/users/:id', users.getUserById);
  // serve up static assets, i.e. HTML, CSS, and JS from /build
  app.use(express.static('build'));
  if (process.env.NODE_ENV)
    app.get('*', (req, res) => res.sendFile(path.join(__dirname + '/build/index.html')));
  // Start the HTTP listener using the given port
  return app.listen(port, () => console.log(`Express server (HTTP) listening on port ${port}`));
};

module.exports = createServer;
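As a side note on the "React homepage" workaround mentioned in the question, a minimal sketch (assuming Create React App; the /service-1 prefix is a hypothetical value mirroring the ALB path pattern) would be to pin the homepage in package.json so the build emits asset URLs under that prefix:

{
  "homepage": "/service-1"
}

The build then references its JS and CSS under /service-1/static/..., and the Express app would need to serve the build directory under the same prefix, e.g. app.use('/service-1', express.static('build')).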

Character encoding issue between logstash and logstash-forwarder

I have the following setup -
[logstash-forwarder nodes] -> [Amazon's elastic load balancer] -> [logstash nodes]
I start logstash-forwarder with the following config file -
{
  "network": {
    "servers": [ "<Load_balancer_DNS_name>:443" ],
    "ssl key": "/etc/pki/private/logstash-forwarder.key",
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "-" ],
      "fields": { "type": "stdin" }
    }
  ]
}
And I start Logstash with the following settings:
input {
  tcp {
    port => "7286"
    codec => plain {
      charset => "UTF-8"
    }
  }
}
output {
  stdout { }
  elasticsearch {
    host => "<cluster_node_ip>"
    protocol => "http"
  }
}
Now I feed some input at the command line to logstash-forwarder just to check that it reaches Logstash correctly. When I type "Hello World" or any other plain text on the logstash-forwarder side, I receive the following on the Logstash node instead of the original text:
Received an event that has a different character encoding than you configured. {:text=>"1W\\u0000\\u0000\\u0000\\u00011C\\u0000\\u0000\\u0000ox^2ta```\\u0004bV fI\\xCB\\xCCI\\u0005\\xF1uA\\x9C\\x8C\\xFC\\xE2\\u0012 -\\x90Y\\xA0kh\\xA0kha\\xAAkdh\\xACkb\\u0006\\u0014c\\xCBOK+N\\u0005\\xC92\\u001A\\x80\\x94\\xE6d\\xE6\\x81\\xF4\\t\\xBBy\\xFA9\\xFAć\\xB8\\u0006\\x87\\xC4{{:9\\xFA9\\xDAۃ\\xA4K*\\v#Ҭ\\xC5%)\\x99y\\u0000\\u0000\\u0000\\u0000\\xFF\\xFF\\u0001\\u0000\\u0000\\xFF\\xFF\\u001A\\x93\\u0015\\xA2", :expected_charset=>"UTF-8", :level=>:warn}
logstash-forwarder uses its own protocol, called "lumberjack", to communicate with Logstash.
You need to have the key and crt on the Logstash server as well, and use a lumberjack input to handle it:
input {
  lumberjack {
    # The port to listen on
    port => 7286
    # The paths to your ssl cert and key
    ssl_certificate => "path/to/logstash-forwarder.crt"
    ssl_key => "path/to/logstash-forwarder.key"
    # Set this to whatever you want.
    type => "somelogs"
  }
}
What you're seeing is the raw (zlib-compressed) lumberjack protocol frames, which the plain tcp input doesn't know how to decode.
https://github.com/elasticsearch/logstash-forwarder#use-with-logstash