create docker swarm on AWS

I created 3 EC2 instances and now I want to create a Docker swarm.
The EC2 instances have a security group with the following inbound rules open:
TCP 2377 0.0.0.0/0
TCP 7946 0.0.0.0/0
TCP 8501 0.0.0.0/0
UDP 4789 0.0.0.0/0
UDP 7946 0.0.0.0/0
SSH 22 0.0.0.0/0
HTTPS 443 0.0.0.0/0
HTTP 80 0.0.0.0/0
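For reference, the ports a swarm actually needs between nodes are fixed by Docker: 2377/tcp for cluster management, 7946/tcp and 7946/udp for node gossip, and 4789/udp for overlay network traffic (8501 is not one of them). A small sketch that prints that checklist, e.g. to feed into security-group rules or probe with `nc -z`:

```shell
# Ports Docker Swarm needs open between all swarm nodes (per the Docker docs):
swarm_ports="2377/tcp 7946/tcp 7946/udp 4789/udp"
for p in $swarm_ports; do
    echo "required between nodes: $p"
done
```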
After creating the resources I run the following command:
docker swarm init --advertise-addr ManagerIP
and then on the other instances I paste the join command.
I then create a network and 2 services
docker network create --driver overlay mydrupal
docker service create --name psql --network mydrupal -e POSTGRES_PASSWORD=postgres postgres:11
docker service create --name drupal --network mydrupal -p 80:80 drupal:8
All of this is currently running either on host1 alone (the swarm leader) or on host1 and host2 (one of the workers).
If I then go to the browser, paste the public IP of an EC2 instance, and try to configure the Postgres database during the Drupal setup, I either get an error, or I get in but then cannot connect from the other instances' IPs.
I am not sure if this is a security group issue or something else.
update
If I remove the workers from the swarm, I can configure the database and run Drupal.
If I then add the workers back to the swarm, I can't connect using their public IPs.
update2
I opened all traffic on all ports in the security group and still nothing.
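For what it's worth, with -p 80:80 the swarm routing mesh publishes port 80 on every node, regardless of which node runs the drupal task, so each node's public IP should answer once inter-node traffic works. A loop sketch over the node IPs (the IPs are placeholders for the three EC2 public IPs; swap the echo for a real `curl -fsS --max-time 5` to actually probe):

```shell
# Placeholder public IPs for the three EC2 instances (assumption):
nodes="3.120.1.10 3.120.1.11 3.120.1.12"
for ip in $nodes; do
    # with the routing mesh, every node should answer on the published port 80
    echo "probe: http://$ip/"
done
```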

Related

Helm install of Istio ingress gateway in EKS cluster opens two node ports to the world

When I install the Istio ingress gateway with Helm,
helm install -f helm/values.yaml istio-ingressgateway istio/gateway -n default --wait
I see several inbound rules opened in the security group associated with the nodes in the cluster. Two of these rules open traffic to the world:
Custom TCP TCP 30111 0.0.0.0/0
Custom TCP TCP 31760 0.0.0.0/0
Does anyone know what services run on these ports and whether they need to be opened to the world?
I didn't expect to have ports on the compute nodes opened to public access.
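Those two ports are most likely the NodePorts of the ingress gateway Service itself: for a Service of type LoadBalancer, Kubernetes allocates one NodePort per service port, and the EKS cloud controller opens them so the load balancer can reach the nodes. One way to confirm is to match the ports in `kubectl get svc` output; a sketch against a hypothetical captured line (all values made up):

```shell
# Hypothetical line from `kubectl get svc istio-ingressgateway` (made-up values):
svc_line='istio-ingressgateway  LoadBalancer  10.100.1.5  a1b2.elb.amazonaws.com  80:30111/TCP,443:31760/TCP'
# The number after each colon is the NodePort the chart opened on every node:
echo "$svc_line" | grep -oE '[0-9]+:(30111|31760)/TCP'   # -> 80:30111/TCP and 443:31760/TCP
```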

Amazon RDS PostgreSQL "psql: error: could not connect to server: could not connect to server: Operation timed out"

I'm at a loss for how to connect to my Amazon RDS PostgreSQL database instance. I'm sure I'm doing something wrong, and none of the other Stack Overflow threads have helped. This is an RDS DB instance in a VPC, accessed by an EC2 instance in the same VPC. Here's what I did.
Created a custom EC2 VPC security group, specified my default VPC:
aws ec2 create-security-group --group-name rds-us-east-databases --vpc-id vpc-1234567 --description "RDS databases in the us-east-1 region"
Created an RDS database subnet group for my default subnets:
aws rds create-db-subnet-group --db-subnet-group-name rds-us-east-databases --db-subnet-group-description "RDS databases" --subnet-ids '["subnet-97xxx9","subnet-e1xxxxdf","subnet-a62xxx9","subnet-baxxxxdc","subnet-axxxxe2","subnet-11xxxx0"]'
Added CIDR traffic rules for my EC2 VPC security group**:
aws ec2 authorize-security-group-ingress --group-name rds-us-east-databases --protocol tcp --port 5432 --cidr 160.3.6.253/32
aws ec2 authorize-security-group-ingress --group-name rds-us-east-databases --protocol tcp --port 5432 --cidr 192.168.1.0/24
aws ec2 authorize-security-group-ingress --group-name rds-us-east-databases --protocol tcp --port 5432 --cidr 10.0.0.0/16
**Here I was testing traffic rules.
Created my PostgreSQL database and specified the EC2 VPC security group ID:
aws rds create-db-instance --db-name DBName --db-instance-identifier MyDbNameID --allocated-storage 20 --db-instance-class db.t2.micro --engine Postgres --master-username postgres --master-user-password MyDatabasePassword1 --vpc-security-group-ids '["sg-02xxxxxxxxxxxx90c"]' --db-subnet-group-name rds-us-east-databases --backup-retention-period 3 --port 5432 --no-publicly-accessible --enable-iam-database-authentication --region us-east-1 --max-allocated-storage 100
Tried to connect using psql.
psql --host=blah.blah.us-east-1.rds.amazonaws.com --port=5432 --username=postgres --password --dbname=DBName
Error:
psql: error: could not connect to server: could not connect to server: Operation timed out
Is the server running on host "blah.blah.rds.amazonaws.com" (IP address) and accepting
TCP/IP connections on port 5432?
Note:
All of the above is on/using the same VPC.
I experienced the same issue a few months ago with another RDS PostgreSQL database and was eventually able to connect, though I don't remember what I did to fix it. So the fault is not with the default VPC.
Also confirmed that I have remote access set up in postgresql.conf, and since I can connect to the other RDS DB, it doesn't seem like a local PostgreSQL issue.
Also confirmed it doesn't have anything to do with public accessibility: I tried making the instance publicly accessible, got a FATAL error, and switched back.
A successful connection to another RDS DB is shown here:
psql --host=database-2.otherdbendpoint.us-east-1.rds.amazonaws.com --port=5432 --username=postgres --password --dbname=myotherdatabase
Password:
psql (12.3, server 11.6)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
myotherdatabase=>
Update: all of the ingress rules for my VPC security group sg-02xxxxxxxxxxxx90c:
PostgreSQL TCP 5432 111.1.1.111/32 (my IP address)
PostgreSQL TCP 5432 192.168.1.0/24
PostgreSQL TCP 5432 0.0.0.0/0
PostgreSQL TCP 5432 10.0.0.0/16
PostgreSQL TCP 5432 10.0.0.0/24
All traffic All All 5432::/16
Update: my egress rules for my VPC security group sg-02xxxxxxxxxxxx90c:
All traffic All All 0.0.0.0/0
It looks like you've confirmed your RDS instance's security group permits traffic on the appropriate port from any IP.
Check the subnets you have associated with your instance, and ensure that their route tables direct public traffic to an internet gateway:
0.0.0.0/0 -> igw-xxxxxxxxxxxxx
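To check that from the CLI, you can dump the route tables for the DB's VPC with `aws ec2 describe-route-tables` and look for the default route; a sketch against a hypothetical captured table (the route entries and gateway ID are made up):

```shell
# Hypothetical routes as returned for the DB's subnet (IDs made up):
routes='10.0.0.0/16 local
0.0.0.0/0 igw-0abc123def456'
# A subnet is public only if the 0.0.0.0/0 default route points at an
# internet gateway (igw-*) rather than being absent or pointing elsewhere:
echo "$routes" | grep '^0\.0\.0\.0/0' | grep -o 'igw-[a-z0-9]*'
```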

Connecting Kubernetes cluster to Redis on the same GCP network

I'm running a Kubernetes cluster and HA Redis VMs in the same VPC on Google Cloud Platform. ICMP and traffic on all TCP and UDP ports are allowed on the subnet 10.128.0.0/20. Kubernetes has its own internal network, 10.12.0.0/14, but the cluster runs on VMs inside 10.128.0.0/20, the same subnet as the Redis VMs.
However, even though the VMs inside 10.128.0.0/20 can see each other, I can't ping those same VMs or connect to their ports when running commands from a Kubernetes pod. What would I need to modify, either in k8s or in the GCP firewall rules, to allow this? I was under the impression that this should work out of the box and that pods would be able to access the same network their nodes run on.
kube-dns is up and running, and this is k8s 1.9.4 on GCP.
I've tried to reproduce your issue with the same configuration, but it works fine. I created a network called "myservernetwork1" with subnet 10.128.0.0/20, started a cluster in this subnet, and created 3 firewall rules to allow ICMP, TCP and UDP traffic inside the network.
$ gcloud compute firewall-rules list --filter="myservernetwork1"
myservernetwork1-icmp myservernetwork1 INGRESS 1000 icmp
myservernetwork1-tcp myservernetwork1 INGRESS 1000 tcp
myservernetwork1-udp myservernetwork1 INGRESS 1000 udp
I allowed all TCP, UDP and ICMP traffic inside the network.
I created the rule for the icmp protocol for my subnet using this command:
gcloud compute firewall-rules create myservernetwork1-icmp \
--allow icmp \
--network myservernetwork1 \
--source-ranges 10.0.0.0/8
I've used the /8 mask because I wanted to cover all addresses in my network. Check your GCP firewall settings to make sure they are correct.
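If reproducing this, the matching tcp and udp rules can be created the same way as the icmp one; a sketch that just prints the two commands to run on a configured gcloud CLI (network name and source range taken from the answer above):

```shell
net=myservernetwork1
for proto in tcp udp; do
    # same shape as the icmp rule above, one rule per protocol
    echo "gcloud compute firewall-rules create ${net}-${proto} --allow ${proto} --network ${net} --source-ranges 10.0.0.0/8"
done
```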

How to connect Amazon Elasticache with Redsmin Redis GUI

AWS Elasticache currently does not allow IP-range based access control. Therefore I don't know how to connect AWS ElastiCache cluster to Redsmin Redis GUI.
To connect your AWS ElastiCache cluster to Redsmin you will need to add two IPTables rules to one of your EC2 instance so it will be able to act as a proxy.
There are two scenarios:
1 - If you have an EC2 instance in the same subnet as your Redis Elasticache
Note:
This will only work if the EC2 instance you connect to is in the same subnet as your Elasticache Redis instance.
The following example assumes that your ElastiCache private IP is 172.31.5.13 and that it is running on port 6379, and that your EC2 instance's private IP is 172.31.5.14 and its public IP is 52.50.145.87.
Now:
Connect to your EC2 instance through SSH
Then run (don't forget to replace 172.31.5.13:6379 with your ElastiCache IP and port number):
sudo iptables -t nat -A PREROUTING -p tcp --dport 6379 -j DNAT --to-destination 172.31.5.13:6379
Then run:
sudo iptables -t nat -A POSTROUTING -p tcp -d 172.31.5.13 --dport 6379 -j SNAT --to-source 172.31.5.14
sudo service iptables save
Again, don't forget to replace 172.31.5.14 with your EC2 server's private IP. The same goes for 172.31.5.13 and 6379: replace them with your ElastiCache IP and port number.
Add a rule in the security group to allow inbound request from Redsmin IP 62.210.222.165, protocol=TCP, port=6379
Add a new Direct Server in Redsmin with the connection string: redis://52.50.145.87:6379, done!
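The resulting connection string is just the EC2 public IP plus the forwarded port; a sketch that assembles it from the example values above (once the rules are in place, you can verify from outside with `redis-cli -h 52.50.145.87 -p 6379 ping`, which should answer PONG):

```shell
EC2_PUBLIC_IP=52.50.145.87   # example public IP from the steps above
REDIS_PORT=6379              # forwarded ElastiCache port
conn="redis://${EC2_PUBLIC_IP}:${REDIS_PORT}"
echo "$conn"                 # paste this into Redsmin as a Direct Server
```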
If you have any issue or questions with the above steps, don't hesitate, contact us, we are happy to help!
2 - If you don't have an EC2 instance in the same subnet as your Redis ElastiCache
Follow this Amazon tutorial to set up a NAT instance, and be sure to set it up in the same subnet as your ElastiCache server. Then follow the steps from the section above.

AWS Redis connect from outside

Is there a way to connect to a Redis instance hosted on AWS from outside the AWS network? I have one Windows-based EC2 instance running on AWS, and the other is a Redis cache node.
I know this question has been asked before, but the answer is in the context of a Linux-based system, while mine is a Windows-based server on AWS. I don't have enough reputation to post comments on existing questions. Here is the link to the existing question on Stack Overflow:
Can you connect to Amazon Elasticache Redis outside of Amazon
Steps to access Elasticache Redis from outside of AWS.
1) Create an EC2 instance in the same VPC as the ElastiCache Redis node, but in a public subnet. Make sure that IP forwarding is enabled:
cat /proc/sys/net/ipv4/ip_forward
A value of 1 indicates that forwarding is enabled.
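A small sketch of that check, with the commands to enable forwarding if it is off (the /proc path and sysctl key are standard on Linux; persist the setting via /etc/sysctl.conf):

```shell
# Read the kernel's forwarding flag; a missing file (non-Linux) reads as empty
fwd=$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null)
if [ "$fwd" = "1" ]; then
    echo "ip forwarding enabled"
else
    # enable now:      sudo sysctl -w net.ipv4.ip_forward=1
    # enable on boot:  add net.ipv4.ip_forward=1 to /etc/sysctl.conf
    echo "ip forwarding disabled"
fi
```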
Make sure Masquerading is enabled:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
2) Create a security group with an inbound rule on the port that you intend to forward (let's say 6379 in this case). Specify the source CIDR block for the incoming connection. Ensure that the outbound rules allow connections to the Redis cluster on the desired port (the default Redis port is 6379).
3) Add an iptables rule to forward traffic from the EC2 instance to ElastiCache (replace <elasticache-ip> with your cache node's IP):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 6379 -j DNAT --to-destination <elasticache-ip>:6379