docker-machine shows that my ec2 docker nodes are timing out - amazon-web-services

I am experiencing this issue with my docker nodes. I have a docker swarm cluster with three nodes, and I am able to SSH to each individual node. However,
when I run docker-machine ls, the State of my amazonec2-driver nodes shows as "Timeout", even though SSH works.

The Docker Swarm documentation states that the following ports should be open:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
A timeout in AWS normally indicates that a security group is not allowing inbound access on the port(s) being accessed.
Ensure you add inbound rules for these ports, with the source set either to the subnet range(s) of the other host(s) or to a security group attached to those host(s).
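For example, if all swarm nodes share one security group, the rules can be added with the AWS CLI. This is only a minimal sketch; sg-0123456789abcdef0 is a placeholder for your own security group ID:
SG=sg-0123456789abcdef0   # placeholder: the security group attached to the swarm nodes
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 2377 --source-group "$SG"   # cluster management
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 7946 --source-group "$SG"   # node communication (TCP)
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol udp --port 7946 --source-group "$SG"   # node communication (UDP)
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol udp --port 4789 --source-group "$SG"   # overlay network traffic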

Related

Cannot access ports in AWS ECS EC2 instance

I am running an AWS ECS service which is running a single task that has multiple containers.
Tasks are run in awsvpc network mode. (EC2, not Fargate)
Container ports are mapped in the ECS task definition.
I added inbound rules in the EC2 Container instance security group (for ex: TCP 8883 -> access from anywhere). Also in the VPC network security group.
When I try to access the ports using Public IP of the instance from my remote PC, I get connection refused.
For ex: nc -z <PublicIP> <port>
When I SSH into the EC2 instance and try netstat, I can see SSH port 22 is listening, but not the container ports (ex: 8883).
Also, when I do docker ps inside the instance, the Ports column is empty.
I could not figure out what configuration I missed. Kindly help.
PS: The destination (public IP) is reachable from the remote PC, just not on that port.
I am running an AWS ECS service which is running a single task that has multiple containers. Tasks are run in awsvpc network mode. (EC2, not Fargate)
EC2, not Fargate: horses for courses. A task run in awsvpc network mode gets its own elastic network interface (ENI), a primary private IP address, and an internal DNS hostname, so how would you access that container with the EC2 instance's public IP?
The task networking features provided by the awsvpc network mode give Amazon ECS tasks the same networking properties as Amazon EC2 instances. When you use the awsvpc network mode in your task definitions, every task that is launched from that task definition gets its own elastic network interface (ENI), a primary private IP address, and an internal DNS hostname. The task networking feature simplifies container networking and gives you more control over how containerized applications communicate with each other and other services within your VPCs.
task-networking
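To see where a task is actually reachable, you can look up its ENI and private IP. A sketch, assuming placeholder cluster and task identifiers:
aws ecs describe-tasks --cluster my-cluster --tasks abcd1234 \
--query 'tasks[0].attachments[0].details'
# the details include the attached ENI and its privateIPv4Address; the container
# answers on that private IP from inside the VPC, not on the instance's public IP
curl http://10.0.1.25:8883   # placeholder private IP and container port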
So you need to place a load balancer (LB) in front of the service and configure your service behind the LB.
When you create any target groups for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an ENI, not with an Amazon EC2 instance.
So either something is wrong with the configuration, or the network mode is being misunderstood. I recommend reading the linked article.
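For illustration, a target group for an awsvpc service could be created like this; the name, VPC ID and port are placeholders, not values taken from the question:
aws elbv2 create-target-group \
--name my-awsvpc-service-tg \
--protocol HTTP --port 8883 \
--vpc-id vpc-0123456789abcdef0 \
--target-type ip   # required for awsvpc tasks; the default 'instance' type will not register them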
when I do docker ps inside the instance, the Ports column is empty.
If the Ports column is empty, the following may explain it.
The host and awsvpc network modes offer the highest networking performance for containers because they use the Amazon EC2 network stack instead of the virtualized network stack provided by the bridge mode. With the host and awsvpc network modes, exposed container ports are mapped directly to the corresponding host port (for the host network mode) or the attached elastic network interface port (for the awsvpc network mode), so you cannot take advantage of dynamic host port mappings.
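You can see the same behaviour on any Docker host: with host networking there is no port mapping for Docker to display, so the Ports column stays empty even though the server is listening. nginx here is just an arbitrary example image:
docker run -d --name host-nginx --network host nginx
docker ps --format 'table {{.Names}}\t{{.Ports}}'   # host-nginx shows an empty Ports column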
Keep the following in mind:
It’s available with the latest variant of the ECS-optimized AMI. It only affects creation of new container instances after opting into awsvpcTrunking. It only affects tasks created with the awsvpc network mode and EC2 launch type. Tasks created with the AWS Fargate launch type always have a dedicated network interface, no matter how many you launch.
optimizing-amazon-ecs-task-density-using-awsvpc-network-mode

How to make a docker swarm network in a public network?

I want to create a swarm of docker nodes over machines in different networks. Say, an instance in AWS and another in GCP.
I have successfully created a swarm between 2 GCP instances using their individual public IP addresses.
From the docs:
https://docs.docker.com/engine/swarm/swarm-tutorial/#open-protocols-and-ports-between-the-hosts
The following ports must be available. On some systems, these ports are open by default.
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
If you plan on creating an overlay network with encryption (--opt encrypted), you also need to ensure ip protocol 50 (ESP) traffic is allowed.
Notes:
Exposing these ports to the internet poses significant security risks.
Performance of an overlay network between two separate data centers will suffer.
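A minimal sketch of forming such a swarm across providers, assuming the ports above are reachable and using placeholder public IPs:
# on the AWS instance (manager), advertise its public IP
docker swarm init --advertise-addr 203.0.113.10
# docker swarm init prints a join token; run the join on the GCP instance (worker)
docker swarm join --token SWMTKN-1-xxxxx 203.0.113.10:2377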

Connecting Kubernetes cluster to Redis on the same GCP network

I'm running a Kubernetes cluster and HA redis VMs on the same VPC on Google Cloud Platform. ICMP and traffic on all TCP and UDP ports is allowed on the subnet 10.128.0.0/20. Kubernetes has its own internal network, 10.12.0.0/14, but the cluster runs on VMs inside of 10.128.0.0/20, the same as the redis VM.
However, even though the VMs inside of 10.128.0.0/20 can see each other, I can't ping the same VM or connect to its ports when running commands from a Kubernetes pod. What would I need to modify, either in k8s or in the GCP firewall rules, to allow this? I was under the impression that this should work out of the box and that pods would be able to access the same network their nodes run on.
kube-dns is up and running, and this is k8s 1.9.4 on GCP.
I've tried to reproduce your issue with the same configuration, but it works fine. I created a network called "myservernetwork1" with subnet 10.128.0.0/20, started a cluster in this subnet, and created 3 firewall rules to allow ICMP, TCP and UDP traffic inside the network.
$ gcloud compute firewall-rules list --filter="myservernetwork1"
myservernetwork1-icmp myservernetwork1 INGRESS 1000 icmp
myservernetwork1-tcp myservernetwork1 INGRESS 1000 tcp
myservernetwork1-udp myservernetwork1 INGRESS 1000 udp
I allowed all TCP, UDP and ICMP traffic inside the network.
I created a rule for the icmp protocol for my subnet using this command:
gcloud compute firewall-rules create myservernetwork1-icmp \
--allow icmp \
--network myservernetwork1 \
--source-ranges 10.0.0.0/8
I’ve used a /8 mask because I wanted to cover all addresses in my network: it includes both the node subnet (10.128.0.0/20) and the pod range (10.12.0.0/14), so traffic arriving with pod source IPs is allowed as well. Check your GCP firewall settings to make sure those are correct.
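The TCP and UDP rules from the listing above can presumably be created the same way, for example:
gcloud compute firewall-rules create myservernetwork1-tcp \
--allow tcp \
--network myservernetwork1 \
--source-ranges 10.0.0.0/8
gcloud compute firewall-rules create myservernetwork1-udp \
--allow udp \
--network myservernetwork1 \
--source-ranges 10.0.0.0/8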

Expose port of AWS EC2 instance to entire network

I have an app which is deployed via Docker on one of our legacy servers and want to deploy it on AWS. All instances reside on the company's private network. Private IP addresses:
My local machine: 10.0.2.15
EC2 instance: 10.110.208.142
If I run nmap 10.110.208.142 from within the Docker container, I see port 443 is open as intended. But if I run that command from another computer on the network, e.g. from my local machine, I see that the port is closed.
How do I open that port to the rest of the network? In the EC2 instance, I've tried:
sudo iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT
and it does not resolve the issue. I've also allowed the appropriate inbound connections on port 443 in my AWS security groups.
Thanks,
You cannot reach EC2 instances in your AWS VPC from a network outside of AWS over the public Internet using the instances' private IP addresses. This is why EC2 instances can have two types of IP addresses: public and private.
If you set up a VPN from your corporate network to your VPC, then you will be able to access EC2 instances using their private IP addresses. Your network and the AWS VPC network cannot have overlapping address ranges (at least not without fancier configurations).
You can also assign a public IP address (which can change on stop/restart) or attach an Elastic IP address to your EC2 instances and then access them over the public Internet.
In either solution you will also need to configure your security groups to allow access over the desired ports.
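For the Elastic IP route, a rough sketch with placeholder IDs:
aws ec2 allocate-address --domain vpc   # returns an AllocationId
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0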
Found the issue: I'm using nginx, and nginx had failed to start, which explains why port 443 appeared to be closed.
In my particular case, nginx failed because the proper SSL certificate was missing.
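A quick way to catch this class of problem on the instance, before suspecting the network, is to check whether anything is actually listening. A sketch:
sudo systemctl status nginx   # is the service actually running?
sudo nginx -t                 # validates the configuration, including certificate paths
sudo ss -tlnp | grep ':443'   # shows whether any process is listening on 443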

Not able to access the EC2 instance added to ECS cluster

I have created the ECS cluster, which as of now contains only one EC2 instance.
I have deployed my node application using docker.
I tried accessing my application, which runs on port 3000 on this EC2 instance, using its public IP address, but somehow I am not getting a response.
When I ping this IP, I get a response back, and the same docker container works fine on another instance.
You must map the container port to a port on your EC2 instance, and open the port you want to reach via the public address in the EC2 security group.
First, open the EC2 instance port:
In the EC2 console > click on your instance > Security groups
Then open the port in the inbound and outbound settings.
Next, map this port to your container port (3000):
ECS > Task Definitions > your task > your container > port mappings
Set the host port to the port you opened, the container port to 3000, and the protocol to tcp.
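Once the task is redeployed with that mapping, it should show up on the instance and answer from outside; for example (the public IP is a placeholder):
docker ps --format 'table {{.Names}}\t{{.Ports}}'   # on the container instance: the Ports column should now show the mapping
nc -zv 203.0.113.10 3000   # from your remote machine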
Okay, it's hard to tell, but since this is probably an access issue, you can try the following (a quick check sequence is sketched below).
Check if port 3000 is open in the Security Group tied to the ECS instance.
SSH into the EC2 instance and check if your node app can be accessed on port 3000. You need to enable SSH in the Security Group for the EC2 instance for this.
The new ECS supports dynamic port mapping, so make sure your task definition is configured to use port 3000.
That should help you narrow down where the real issue is.
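A minimal sequence for the first two checks, with a placeholder security group ID:
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
--query 'SecurityGroups[0].IpPermissions'   # does it allow TCP 3000?
curl -I http://localhost:3000   # from an SSH session on the instance: is the app answering locally?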