Stunnel for ElastiCache Redis (cluster mode enabled) - amazon-web-services

I have spun up an ElastiCache Redis (cluster mode enabled) cluster on AWS. I have 3 master shards and 1 replica for each (3 replicas total). I have turned on in-transit encryption. For this I have installed stunnel on my EC2 instance, and my config file looks like the following (3001 is my cluster port):
[redis-cli]
client = yes
accept = 127.0.0.1:3001
connect = master1_url:3001
[redis-cli1]
client = yes
accept = 127.0.0.1:3002
connect = replica1_url.com:3001
[redis-cli2]
client = yes
accept = 127.0.0.1:3003
connect = master2_url:3001
[redis-cli3]
client = yes
accept = 127.0.0.1:3004
connect = replica2_url.com:3001
[redis-cli4]
client = yes
accept = 127.0.0.1:3005
connect = master3_url:3001
[redis-cli5]
client = yes
accept = 127.0.0.1:3006
connect = replica3_url.com:3001
----------------------------------------------
sudo netstat -tulnp | grep -i stunnel
tcp 0 0 127.0.0.1:3001 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3002 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3003 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3004 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3005 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3006 0.0.0.0:* LISTEN 32272/stunnel
When I connect using localhost (src/redis-cli -c -h localhost -p 3001), my connection is successful. But when I run "get key", it gets stuck with the following:
localhost:3001> get key
-> Redirected to slot [12539] located at master3_url:3001
If I change the cluster to a single shard and single replica, everything works fine. What setting am I missing when using a multi-shard cluster? My EC2 instance is open to accept connections on all ports, and the Redis cluster is open to accept connections from the EC2 instance on all ports.
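For reference, each local tunnel can be checked on its own, independent of cluster redirects, with something like the sketch below; it only assumes the six local ports from the config above.
# Sketch only: PING every local stunnel endpoint to confirm the TLS tunnels themselves respond
for port in 3001 3002 3003 3004 3005 3006; do
    src/redis-cli -h 127.0.0.1 -p "$port" ping
done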
This is my first question on stackoverflow :)

Related

Mount local directory to AWS ECS Fargate Container

Goal
The goal is to mount a local version of a GitHub repository to an ECS Fargate container.
Background
Local machine: macOS Catalina v10.15.7
The container is configured to use the ECS Exec functionality so that IAM users can interactively run integration tests via the ECS Fargate container. The ECS service isn't constrained to using Fargate and can use EC2 if needed to solve this problem.
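For background, ECS Exec sessions are opened with the AWS CLI; a minimal sketch is below (the cluster, task, and container names are placeholders, not values from this setup).
# Sketch only: open an interactive shell in the running task via ECS Exec
aws ecs execute-command \
  --cluster <cluster-name> \
  --task <task-id> \
  --container <container-name> \
  --interactive \
  --command "/bin/sh"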
Here are the associated VPC configurations:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = local.mut_id
cidr = "10.0.0.0/16"
azs = ["us-west-2a", "us-west-2b", "us-west-2c", "us-west-2d"]
enable_dns_hostnames = true
public_subnets = local.public_subnets
database_subnets = local.database_subnets
}
Attempts
Mount an AWS EFS file system to the local machine directory, then mount the EFS onto the ECS container. As a prerequisite to mounting the EFS, I created a Client VPN endpoint associated with the VPC subnet that the ECS and EFS resources are hosted in.
EFS configurations:
resource "aws_efs_file_system" "testing" {
creation_token = local.mut_id
}
resource "aws_efs_mount_target" "testing" {
file_system_id = aws_efs_file_system.testing.id
subnet_id = module.vpc.public_subnets[0]
security_groups = [aws_security_group.testing_efs.id]
}
data "aws_iam_policy_document" "testing_efs" {
statement {
effect = "Allow"
actions = [
"elasticfilesystem:ClientMount",
"elasticfilesystem:ClientWrite",
"elasticfilesystem:ClientRootAccess"
]
principals {
type = "AWS"
identifiers = [data.aws_caller_identity.current.arn]
}
}
}
resource "aws_security_group" "testing_efs" {
name = "${local.mut_id}-integration-testing-efs"
description = "Allows inbound access from testing container"
vpc_id = module.vpc.vpc_id
ingress {
description = "Allows EFS mount point access from testing container"
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = ["${chomp(data.http.my_ip.body)}/32"]
}
}
data "http" "my_ip" {
url = "http://ipv4.icanhazip.com"
}
VPN config:
module "testing_vpn" {
source = "DNXLabs/client-vpn/aws"
version = "0.3.0"
cidr = "172.31.0.0/16"
name = local.mut_id
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.public_subnets
security_group_id = aws_security_group.testing_vpn.id
split_tunnel = true
}
resource "aws_security_group" "testing_vpn" {
name = "${local.mut_id}-integration-testing-vpn"
description = "Allows inbound VPN connection"
vpc_id = module.vpc.vpc_id
ingress {
description = "Inbound VPN connection"
from_port = 443
protocol = "UDP"
to_port = 443
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
protocol = "-1"
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
}
Here are the related commands used to start the VPN and mount the EFS to the local directory:
client_cfg=$(aws ec2 export-client-vpn-client-configuration --client-vpn-endpoint-id "$vpn_endpoint" --output text)
client_cfg=$(cat <<EOT >> $PWD/client.ovpn
$client_cfg
<cert>
$vpn_cert
</cert>
<key>
$vpn_private_key
</key>
EOT
)
echo "Running OpenVPN"
sudo openvpn --config "$PWD/client.ovpn" --daemon
echo "Mounting to EFS: $local_mount"
mount -t nfs -o nfsvers=4,rsize=8192,wsize=8192,hard,timeo=600,retrans=2,noresvport "$efs_dns":/ "$local_mount"
The command mount -t nfs -o nfsvers=4,rsize=8192,wsize=8192,hard,timeo=600,retrans=2,noresvport "$efs_dns":/ "$local_mount" results in the following error even though the directory exists:
mount: realpath <$local_mount>: No such file or directory
I've also tried running sudo nfsd restart and using the IP address of the EFS mount target like so: mount -t nfs -o nfsvers=4,rsize=8192,wsize=8192,hard,timeo=600,retrans=2,noresvport x.x.x.x:/ "$local_mount", which results in a timeout error.
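Two quick checks that can help narrow this down, sketched here on the assumption that the VPN tunnel is up on the Mac:
# Sketch only: confirm the EFS DNS name resolves to the mount target's private IP
dig +short "$efs_dns"
# Sketch only: confirm NFS (2049/tcp) is reachable through the VPN
nc -z -v -w 5 "$efs_dns" 2049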
OpenVPN version: 2.5.5
rpcinfo -p:
100000 2 udp 111 rpcbind
100000 3 udp 111 rpcbind
100000 4 udp 111 rpcbind
100000 2 tcp 111 rpcbind
100000 3 tcp 111 rpcbind
100000 4 tcp 111 rpcbind
100024 1 udp 1011 status
100024 1 tcp 1021 status
100021 0 udp 946 nlockmgr
100021 1 udp 946 nlockmgr
100021 3 udp 946 nlockmgr
100021 4 udp 946 nlockmgr
100021 0 tcp 1017 nlockmgr
100021 1 tcp 1017 nlockmgr
100021 3 tcp 1017 nlockmgr
100021 4 tcp 1017 nlockmgr
100011 1 udp 648 rquotad
100011 2 udp 648 rquotad
100011 1 tcp 937 rquotad
100011 2 tcp 937 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100005 1 udp 928 mountd
100005 3 udp 928 mountd
100005 1 tcp 929 mountd
100005 3 tcp 929 mountd

How to connect to an EKS service from outside the cluster using a LoadBalancer in a private VPC

I am trying to expose an EKS deployment of Kafka outside the cluster, within the same VPC.
In Terraform I added an ingress rule to the Kafka security group:
ingress {
  from_port = 9092
  protocol = "tcp"
  to_port = 9092
  cidr_blocks = [
    "10.0.0.0/16",
  ]
}
This is the service YAML:
apiVersion: v1
kind: Service
metadata:
  name: bootstrap-external
  namespace: kafka
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "10.0.0.0/16"
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-0....d,sg-0db....ae"
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 9092
      targetPort: 9092
  selector:
    app: kafka
When trying to connect from another instance belonging to one of the security groups in the YAML,
I seem to be able to establish a connection through the load balancer, but I am not correctly referred to Kafka:
[ec2-user@ip-10-0-4-47 kafkacat]$ nc -zvw10 internal-a08....628f-1654182718.us-east-2.elb.amazonaws.com 9092
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.0.3.151:9092.
Ncat: 0 bytes sent, 0 bytes received in 0.05 seconds.
[ec2-user@ip-10-0-4-47 kafkacat]$ nmap -Pn internal-a0837....a0e628f-1654182718.us-east-2.elb.amazonaws.com -p 9092
Starting Nmap 6.40 ( http://nmap.org ) at 2021-02-28 07:19 UTC
Nmap scan report for internal-a083747ab.....8f-1654182718.us-east-2.elb.amazonaws.com (10.0.2.41)
Host is up (0.00088s latency).
Other addresses for internal-a083747ab....36f0a0e628f-1654182718.us-east-2.elb.amazonaws.com (not scanned): 10.0.3.151 10.0.1.85
rDNS record for 10.0.2.41: ip-10-0-2-41.us-east-2.compute.internal
PORT STATE SERVICE
9092/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds
[ec2-user@ip-10-0-4-47 kafkacat]$ kafkacat -b internal-a083747abf4....-1654182718.us-east-2.elb.amazonaws.com:9092 -t models
% Auto-selecting Consumer mode (use -P or -C to override)
% ERROR: Local: Host resolution failure: kafka-2.broker.kafka.svc.cluster.local:9092/2: Failed to resolve 'kafka-2.broker.kafka.svc.cluster.local:9092': Name or service not known
% ERROR: Local: Host resolution failure: kafka-1.broker.kafka.svc.cluster.local:9092/1: Failed to resolve 'kafka-1.broker.kafka.svc.cluster.local:9092': Name or service not known
% ERROR: Local: Host resolution failure: kafka-0.broker.kafka.svc.cluster.local:9092/0: Failed to resolve 'kafka-0.broker.kafka.svc.cluster.local:9092': Name or service not known
^C[ec2-user@ip-10-0-4-47 kafkacat]$
We solved the Kafka connection by:
Adding an ingress rule to the Kafka worker security group (we use Terraform):
ingress {
  from_port = 9094
  protocol = "tcp"
  to_port = 9094
  cidr_blocks = [
    "10.0.0.0/16",
  ]
}
Provisioning a load balancer service for each broker in Kubernetes YAML (note that the last digit of the nodePort corresponds to the broker's StatefulSet ID).
apiVersion: v1
kind: Service
metadata:
  name: bootstrap-external-0
  namespace: kafka
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "10.0.0.0/16"
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: sg-....d,sg-0db14....e,sg-001ce.....e,sg-0fe....15d13c
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      targetPort: 9094
      port: 32400
      nodePort: 32400
  selector:
    app: kafka
    kafka-broker-id: "0"
Retrieving the load balancer name by parsing kubectl -n kafka get svc bootstrap-external-0.
Adding a DNS name by convention using Route 53.
We plan to automate this by terraforming the Route 53 record after the load balancer is created.
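A rough sketch of what that Route 53 automation could look like (the hosted zone, record name, and load balancer DNS variables are hypothetical placeholders, not part of the setup above):
# Sketch only: map a convention-based broker hostname to its load balancer DNS name
resource "aws_route53_record" "kafka_broker_0" {
  zone_id = var.kafka_zone_id            # hypothetical hosted zone
  name    = "kafka-0.example.internal"   # hypothetical convention-based name
  type    = "CNAME"
  ttl     = 300
  records = [var.broker_0_lb_dns_name]   # hypothetical; e.g. parsed from kubectl get svc
}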

Connection issue with Hazelcast on Amazon AWS

I am using Hazelcast v3.6 on two Amazon AWS virtual machines (not using the AWS-specific settings for Hazelcast). The connection is supposed to work via the TCP/IP connection settings (not multicast). I have opened ports 5701-5801 for connections on the virtual machines.
I have tried using iperf on the two virtual machines, and I can see that the client on one VM connects to the server on the other VM (and vice versa when I switch the client/server setup for iperf).
When I launch two Hazelcast servers on different VMs, the connection is not established. The log statements and the hazelcast.xml config are given below (I am not using the programmatic settings for Hazelcast). I have changed the IP addresses below:
20160401-16:41:02.812 [cached2] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Connecting to /22.23.24.25:5701, timeout: 0, bind-any: true
20160401-16:41:02.812 [cached3] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Connecting to /22.23.24.25:5703, timeout: 0, bind-any: true
20160401-16:41:02.813 [cached1] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Connecting to /22.23.24.25:5702, timeout: 0, bind-any: true
20160401-16:41:02.816 [cached1] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Could not connect to: /22.23.24.25:5702. Reason: SocketException[Connection refused to address /22.23.24.25:5702]
20160401-16:41:02.816 [cached1] TcpIpJoiner INFO - [45.46.47.48]:5701 [dev] [3.6] Address[22.23.24.25]:5702 is added to the blacklist.
20160401-16:41:02.817 [cached3] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Could not connect to: /22.23.24.25:5703. Reason: SocketException[Connection refused to address /22.23.24.25:5703]
20160401-16:41:02.817 [cached3] TcpIpJoiner INFO - [45.46.47.48]:5701 [dev] [3.6] Address[22.23.24.25]:5703 is added to the blacklist.
20160401-16:41:02.834 [cached2] TcpIpConnectionManager INFO - [45.46.47.48]:5701 [dev] [3.6] Established socket connection between /45.46.47.48:51965 and /22.23.24.25:5701
20160401-16:41:02.849 [hz._hzInstance_1_dev.IO.thread-in-0] TcpIpConnection INFO - [45.46.47.48]:5701 [dev] [3.6] Connection [Address[22.23.24.25]:5701] lost. Reason: java.io.EOFException[Remote socket closed!]
20160401-16:41:02.851 [hz._hzInstance_1_dev.IO.thread-in-0] NonBlockingSocketReader WARN - [45.46.47.48]:5701 [dev] [3.6] hz._hzInstance_1_dev.IO.thread-in-0 Closing socket to endpoint Address[54.89.161.228]:5701, Cause:java.io.EOFException: Remote socket closed!
20160401-16:41:03.692 [cached2] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Connecting to /22.23.24.25:5701, timeout: 0, bind-any: true
20160401-16:41:03.693 [cached2] TcpIpConnectionManager INFO - [45.46.47.48]:5701 [dev] [3.6] Established socket connection between /45.46.47.48:60733 and /22.23.24.25:5701
20160401-16:41:03.696 [hz._hzInstance_1_dev.IO.thread-in-1] TcpIpConnection INFO - [45.46.47.48]:5701 [dev] [3.6] Connection [Address[22.23.24.25]:5701] lost. Reason: java.io.EOFException[Remote socket closed!]
Part of Hazelcast config
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.6.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <group>
        <name>abc</name>
        <password>defg</password>
    </group>
    <network>
        <port auto-increment="true" port-count="100">5701</port>
        <outbound-ports>
            <ports>0-5900</ports>
        </outbound-ports>
        <join>
            <multicast enabled="false">
                <!--<multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>-->
            </multicast>
            <tcp-ip enabled="true">
                <member>22.23.24.25</member>
            </tcp-ip>
        </join>
        <interfaces enabled="true">
            <interface>45.46.47.48</interface>
        </interfaces>
        <ssl enabled="false" />
        <socket-interceptor enabled="false" />
        <symmetric-encryption enabled="false">
            <algorithm>PBEWithMD5AndDES</algorithm>
            <!-- salt value to use when generating the secret key -->
            <salt>thesalt</salt>
            <!-- pass phrase to use when generating the secret key -->
            <password>thepass</password>
            <!-- iteration count to use when generating the secret key -->
            <iteration-count>19</iteration-count>
        </symmetric-encryption>
    </network>
    <partition-group enabled="false"/>
iperf server and client log statements
Server listening on TCP port 5701
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 22.23.24.25, TCP port 5701
TCP window size: 1.33 MByte (default)
------------------------------------------------------------
[ 5] local 172.31.17.104 port 57398 connected with 22.23.24.25 port 5701
[ 4] local 172.31.17.104 port 5701 connected with 22.23.24.25 port 55589
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 662 MBytes 555 Mbits/sec
[ 4] 0.0-10.0 sec 797 MBytes 666 Mbits/sec
Server listening on TCP port 5701
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local xxx.xx.xxx.xx port 5701 connected with 22.23.24.25 port 57398
------------------------------------------------------------
Client connecting to 22.23.24.25, TCP port 5701
TCP window size: 1.62 MByte (default)
------------------------------------------------------------
[ 6] local 172.31.17.23 port 55589 connected with 22.23.24.25 port 5701
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.0 sec 797 MBytes 669 Mbits/sec
[ 4] 0.0-10.0 sec 662 MBytes 553 Mbits/sec
Note:
I forgot to mention that I can connect from a Hazelcast client to a server, i.e. when I use a Hazelcast client to connect to a single Hazelcast server node, I am able to connect just fine.
An outbound port range which includes 0 is interpreted by Hazelcast as "use ephemeral ports", so the <outbound-ports> element actually has no effect in your configuration. There is an associated test in the Hazelcast sources: https://github.com/hazelcast/hazelcast/blob/75251c4f01d131a9624fc3d0c4190de5cdf7d93a/hazelcast/src/test/java/com/hazelcast/nio/NodeIOServiceTest.java#L60
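So if the intent really is to restrict outbound ports, the range must not contain 0; a minimal sketch (the specific range here is an arbitrary example):
<!-- Sketch only: a range without 0, so Hazelcast actually restricts outbound ports -->
<outbound-ports>
    <ports>33000-35000</ports>
</outbound-ports>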

Kubernetes manifest apiserver, no forwarding?

I am working on building a Kubernetes cluster on AWS using Terraform, by reverse-engineering the kube-aws script here:
https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html
However, when it is created, the kube-apiserver pod does not forward 443 to the host, so the API cannot be reached (it does forward 8080 to 127.0.0.1).
The manifest in question:
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:${K8S_VER}
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd_servers=${ETCD_ENDPOINTS}
    - --allow-privileged=true
    - --service-cluster-ip-range=${SERVICE_IP_RANGE}
    - --secure_port=443
    - --advertise-address=${ADVERTISE_IP}
    - --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --cloud-provider=aws
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
Some output:
ip-10-0-0-50 core # docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
47d36516ada9 gcr.io/google_containers/hyperkube:v1.0.7 "/hyperkube apiserve 18 minutes ago Up 18 minutes k8s_kube-apiserver.daa12bc1_kube-apiserver-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_0ff7c6642d467da6eec9af9d96af0622_b88e9ada
48f85774ff5c gcr.io/google_containers/hyperkube:v1.0.7 "/hyperkube schedule 38 minutes ago Up 38 minutes k8s_kube-scheduler.cca58e1_kube-scheduler-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_8aa2dd5e26e716aa54d97e2691e100e0_d6865ecb
1242789081a9 gcr.io/google_containers/hyperkube:v1.0.7 "/hyperkube controll 38 minutes ago Up 38 minutes k8s_kube-controller-manager.9ddfd2a0_kube-controller-manager-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_66bae8c21c0937cc285af054be236103_16b6bfb9
2ebafb2a3413 gcr.io/google_containers/hyperkube:v1.0.7 "/hyperkube proxy -- 38 minutes ago Up 38 minutes k8s_kube-proxy.de5c3084_kube-proxy-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_e6965a2424ca55206c44b02ad95f479e_dacdc559
ade9cd54f391 gcr.io/google_containers/pause:0.8.0 "/pause" 38 minutes ago Up 38 minutes k8s_POD.e4cc795_kube-scheduler-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_8aa2dd5e26e716aa54d97e2691e100e0_b72b8dba
78633207462f gcr.io/google_containers/pause:0.8.0 "/pause" 38 minutes ago Up 38 minutes k8s_POD.e4cc795_kube-controller-manager-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_66bae8c21c0937cc285af054be236103_71057c93
b97643a86f51 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-s 39 minutes ago Up 39 minutes k8s_controller-manager-elector.663462cc_kube-podmaster-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_8e57c3cada4c03fae8d01352505c25e5_0bb98126
0859c891679e gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-s 39 minutes ago Up 39 minutes k8s_scheduler-elector.468957a0_kube-podmaster-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_8e57c3cada4c03fae8d01352505c25e5_fe401f47
e948e718f3d8 gcr.io/google_containers/pause:0.8.0 "/pause" 39 minutes ago Up 39 minutes k8s_POD.e4cc795_kube-apiserver-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_0ff7c6642d467da6eec9af9d96af0622_774d1393
eac6b18c0900 gcr.io/google_containers/pause:0.8.0 "/pause" 39 minutes ago Up 39 minutes k8s_POD.e4cc795_kube-podmaster-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_8e57c3cada4c03fae8d01352505c25e5_949f1945
6411aed07d40 gcr.io/google_containers/pause:0.8.0 "/pause" 39 minutes ago Up 39 minutes k8s_POD.e4cc795_kube-proxy-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_e6965a2424ca55206c44b02ad95f479e_160a3b0f
ip-10-0-0-50 core # netstat -lnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 1818/hyperkube
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 7966/hyperkube
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1335/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 1800/hyperkube
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 1820/hyperkube
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 610/systemd-resolve
tcp6 0 0 :::10255 :::* LISTEN 1335/kubelet
tcp6 0 0 :::22 :::* LISTEN 1/systemd
tcp6 0 0 :::55447 :::* LISTEN 1800/hyperkube
tcp6 0 0 :::42274 :::* LISTEN 1800/hyperkube
tcp6 0 0 :::10250 :::* LISTEN 1335/kubelet
tcp6 0 0 :::5355 :::* LISTEN 610/systemd-resolve
udp 0 0 10.0.0.50:68 0.0.0.0:* 576/systemd-network
udp 0 0 0.0.0.0:8285 0.0.0.0:* 1456/flanneld
udp 0 0 0.0.0.0:5355 0.0.0.0:* 610/systemd-resolve
udp6 0 0 :::5355 :::* 610/systemd-resolve
udp6 0 0 :::52627 :::* 1800/
ip-10-0-0-50 core # docker logs 47d36516ada9
I1127 23:47:15.421827 1 aws.go:489] Zone not specified in configuration file; querying AWS metadata service
I1127 23:47:15.523047 1 aws.go:595] AWS cloud filtering on tags: map[KubernetesCluster:kubernetes]
I1127 23:47:15.692595 1 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
[restful] 2015/11/27 23:47:15 log.go:30: [restful/swagger] listing is available at https://10.0.0.50:443/swaggerapi/
[restful] 2015/11/27 23:47:15 log.go:30: [restful/swagger] https://10.0.0.50:443/swaggerui/ is mapped to folder /swagger-ui/
E1127 23:47:15.718842 1 reflector.go:136] Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: connection refused
E1127 23:47:15.719005 1 reflector.go:136] Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token: dial tcp 127.0.0.1:8080: connection refused
E1127 23:47:15.719150 1 reflector.go:136] Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts: dial tcp 127.0.0.1:8080: connection refused
E1127 23:47:15.719307 1 reflector.go:136] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: connection refused
E1127 23:47:15.719457 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused
E1127 23:47:15.719506 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused
I1127 23:47:15.767717 1 server.go:441] Serving securely on 0.0.0.0:443
I1127 23:47:15.767796 1 server.go:483] Serving insecurely on 127.0.0.1:8080
So it immediately occurred to me to check the certificates I was using after posting this (rubber-ducking ftw).
It turns out I was merely passing the wrong file to the tls-cert-file= argument.
After correcting it to the right one, everything started working straight away!
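For anyone hitting the same thing, a quick way to check that a certificate and key actually belong together (assuming RSA keys, and using the paths from the manifest above):
# Sketch only: the two digests should match if the cert and key are a pair
openssl x509 -noout -modulus -in /etc/kubernetes/ssl/apiserver.pem | openssl md5
openssl rsa -noout -modulus -in /etc/kubernetes/ssl/apiserver-key.pem | openssl md5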

awk input file has spaces in quotes

Good day.
I'm trying to learn some awk to convert some Juniper firewall configs to Cisco or Palo Alto configs. Part of that is parsing the configuration. I have a sample here:
set service "RDP" protocol tcp src-port 0-65535 dst-port 3389-3389
set service "LDAPS" protocol tcp src-port 0-65535 dst-port 636-636
set service "SOAPS" protocol tcp src-port 0-65535 dst-port 444-444
set service "KEYS-ADMIN" protocol tcp src-port 0-65535 dst-port 9000-9000
set service "WSUS-MDM" protocol tcp src-port 0-65535 dst-port 8530-8530
set service "WSUS-MDM" + tcp src-port 0-65535 dst-port 8531-8531
set service "WSUS-MDM" + tcp src-port 0-65535 dst-port 8531-8531
set service "HTTPS-MDM" protocol tcp src-port 0-65535 dst-port 8443-8443
set service "IPSEC - 4500" protocol udp src-port 0-65535 dst-port 4500-4500
set service "IPSEC - 4500" + tcp src-port 0-65535 dst-port 1433-1433
set service "IPSEC - 4500" + tcp src-port 0-65535 dst-port 1433-1433
set service "OKFTP" protocol tcp src-port 0-65535 dst-port 2169-2169
set service "Bomgar 8200" protocol tcp src-port 0-65535 dst-port 8200-8200
set service "Cisco VPN" protocol tcp src-port 0-65535 dst-port 10000-10000
set service "Cisco VPN 2" protocol tcp src-port 0-65535 dst-port 10000-10000
set service "Cisco VPN 2" + udp src-port 0-65535 dst-port 10000-10000
set service "Cisco VPN 2" + udp src-port 0-65535 dst-port 500-500
set service "Cisco VPN 2" + udp src-port 0-65535 dst-port 4500-4500
set service "Cisco VPN 2" + 50 src-port 0-65535 dst-port 0-65535
set service "Cisco VPN 2" + udp src-port 0-65535 dst-port 10000-10000
set service "Cisco VPN 2" + udp src-port 0-65535 dst-port 500-500
set service "Cisco VPN 2" + udp src-port 0-65535 dst-port 4500-4500
set service "TrendMicro8080" protocol tcp src-port 0-65535 dst-port 8080-8080
set service "TrendMicro26980" protocol tcp src-port 0-65535 dst-port 26980-26980
set service "TrendMicro26980" + udp src-port 0-65535 dst-port 26980-26980
set service "PenPal Test" protocol tcp src-port 0-65535 dst-port 522-522
set service "HTTP8080" protocol tcp src-port 0-65535 dst-port 8080-8080
set service "HTTPS445" protocol tcp src-port 0-65535 dst-port 445-445
set service "MOBILEIRON-TLS" protocol tcp src-port 0-65535 dst-port 9997-9997
set service "MOBILEIRON-TLS" + tcp src-port 0-65535 dst-port 9998-9998
I saved this snippet of lines to a file named test1 and ran this command from the command line:
awk -F " " 'BEGIN {OFS=","} {print $3,$5,$7,$9}' test1
Although it MOSTLY worked out, the spaces contained inside the quotes were treated by awk as field separators. The output:
"RDP",tcp,0-65535,3389-3389
"LDAPS",tcp,0-65535,636-636
"SOAPS",tcp,0-65535,444-444
"KEYS-ADMIN",tcp,0-65535,9000-9000
"WSUS-MDM",tcp,0-65535,8530-8530
"WSUS-MDM",tcp,0-65535,8531-8531
"WSUS-MDM",tcp,0-65535,8531-8531
"HTTPS-MDM",tcp,0-65535,8443-8443
"IPSEC,4500",udp,0-65535
"IPSEC,4500",tcp,0-65535
"IPSEC,4500",tcp,0-65535
"OKFTP",tcp,0-65535,2169-2169
"Bomgar,protocol,src-port,dst-port
"Cisco,protocol,src-port,dst-port
"Cisco,2",tcp,0-65535
"Cisco,2",udp,0-65535
"Cisco,2",udp,0-65535
"Cisco,2",udp,0-65535
"Cisco,2",50,0-65535
"Cisco,2",udp,0-65535
"Cisco,2",udp,0-65535
"Cisco,2",udp,0-65535
Ideally, I would like to have awk ignore the spaces inside the quotes. I guess I could add it as a regular expression? Do I use the '!' somehow? Not sure. Any help would be appreciated.
There are likely many ways to achieve your end result (including a pure-awk one):
awk -F\" 'BEGIN {OFS=","} {split($3,F," ");print $2,F[2],F[4],F[6]}' test1
Another possible way is to use sed:
sed 's/\("[^"]*"\)* \("[^"]*"\)*/\1,\2/g' test1
...or piped to awk:
sed 's/\("[^"]*"\)* \("[^"]*"\)*/\1,\2/g' test1 | awk -F ',' 'BEGIN {OFS=","} {print $3,$5,$7,$9}'
Output:
"RDP",tcp,0-65535,3389-3389
"LDAPS",tcp,0-65535,636-636
"SOAPS",tcp,0-65535,444-444
"KEYS-ADMIN",tcp,0-65535,9000-9000
"WSUS-MDM",tcp,0-65535,8530-8530
"WSUS-MDM",tcp,0-65535,8531-8531
"WSUS-MDM",tcp,0-65535,8531-8531
"HTTPS-MDM",tcp,0-65535,8443-8443
"IPSEC - 4500",udp,0-65535,4500-4500
"IPSEC - 4500",tcp,0-65535,1433-1433
"IPSEC - 4500",tcp,0-65535,1433-1433
"OKFTP",tcp,0-65535,2169-2169
"Bomgar 8200",tcp,0-65535,8200-8200
"Cisco VPN",tcp,0-65535,10000-10000
"Cisco VPN 2",tcp,0-65535,10000-10000
"Cisco VPN 2",udp,0-65535,10000-10000
"Cisco VPN 2",udp,0-65535,500-500
...
The awk solution was discovered after learning of this excellent example.
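Another possible approach, assuming GNU awk (gawk 4.0 or later) is available, is to define fields with FPAT so that a double-quoted string counts as a single field; a sketch against the same test1 file:
# Sketch only: a field is either a run of non-spaces or a double-quoted string
gawk 'BEGIN { FPAT = "([^ ]+)|(\"[^\"]+\")"; OFS = "," } {print $3, $5, $7, $9}' test1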