Has anyone tried to configure an AWS Application Load Balancer for Docker Swarm running on plain EC2 instances (not on ECS)? Most documentation only covers Docker for AWS. I saw some posts saying you must include the ARN in a service label, but I think it's still not working. Also, the load balancer's DNS name does not serve the nginx page, even though port 80 is already allowed in our security group.
This is the command I used when creating the service:
docker service create --name=test --publish 80:80 --publish 444:80 --constraint 'engine.labels.serverType == dev' --replicas=2 --label com.docker.aws.lb.arn="<arn-value-here>" nginx:alpine
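As a quick sanity check (not part of the original post), I'd first confirm the swarm routing mesh answers on port 80 on the nodes themselves before involving the ALB:
# Run on any swarm node
docker service ps test          # are both replicas running on the expected nodes?
curl -I http://localhost:80     # the routing mesh should respond on every node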
Current Setup:
EC2 instance
Subnet included on the loadbalancer
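As far as I know, nothing outside Docker for AWS acts on the com.docker.aws.lb.arn label, so on plain Swarm-on-EC2 the instances have to be registered in the ALB's target group yourself. A rough sketch with the AWS CLI (the target group ARN and instance IDs are placeholders):
# Register the swarm nodes as targets
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=i-0123456789abcdef0 Id=i-0fedcba9876543210
# The target group port should match the published port (80), and the instances'
# security group must allow that port from the ALB's security group.
The ALB then balances across the nodes and the swarm routing mesh forwards requests to the replicas.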
Any insights will be much appreciated.
Related
I am using the Jenkins Amazon ECS / Fargate plugin (https://plugins.jenkins.io/amazon-ecs/) for builds and pushes. I have an EC2 machine that runs the Jenkins master, a Nexus repository, and SonarQube, and with this plugin I create Fargate containers as Jenkins workers. The workers run in the same subnet and the same VPC as the EC2 machine. But when I apply an IP whitelist on port 443 for Nexus and SonarQube, the Fargate containers can no longer reach them, even though they are in the same public subnet. I use different security groups for the EC2 machine and the Fargate containers, but the subnets and the VPC are the same.
I need to lock down the Jenkins master, Nexus, and SonarQube login pages, so I think I need a whitelist (is there another way to close them off?). What should I do to allow communication between the Fargate containers and the EC2 machine?
Update:
The subnet is a public subnet.
The Fargate security group's outbound rules are completely open.
The error is "Connection timed out".
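A hedged sketch of what typically fixes this kind of timeout: rather than whitelisting IPs, reference the Fargate tasks' security group as the allowed source on the EC2 instance's security group (group IDs below are placeholders):
# <ec2-sg-id> = EC2 instance SG, <fargate-sg-id> = SG used by the Fargate tasks
aws ec2 authorize-security-group-ingress \
  --group-id <ec2-sg-id> \
  --protocol tcp \
  --port 443 \
  --source-group <fargate-sg-id>
Note that a security-group reference only matches traffic arriving via the private IP inside the VPC; if the workers call Nexus/SonarQube through the public IP or public DNS, either switch them to the private address or whitelist the subnet's CIDR instead.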
I am trying to achieve the following network topology. I want the EC2 instances in private subnets to receive http traffic on port 80 from application load balancer.
For that, I have launched one EC2 instance in each of the two private subnets and installed the Apache web server with an index.html using the following user data script.
#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
echo "Hello World from $(hostname -f)" > /var/www/html/index.html
Next, I created an ALB in the public subnets and registered the EC2 instances with a target group while creating it. But the health checks for the registered instances always fail. Please find the image below.
I have double-checked the security groups for both the EC2 instances and the ALB, and both look fine to me. Could anyone please let me know what I am missing here?
thanks
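For reference, the security-group relationship this topology needs looks roughly like the sketch below (group IDs are placeholders). A common cause of failing health checks is the instance security group not allowing the health-check port from the ALB's security group:
# ALB security group: allow HTTP from the internet
aws ec2 authorize-security-group-ingress --group-id <alb-sg-id> \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
# Instance security group: allow port 80 only from the ALB's security group
aws ec2 authorize-security-group-ingress --group-id <instance-sg-id> \
  --protocol tcp --port 80 --source-group <alb-sg-id>
It's also worth confirming the health-check path returns 200 by curling http://localhost/ from one of the instances.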
So I've been struggling with the fact that I'm unable to expose any deployment in my EKS cluster.
I got down to this:
My LoadBalancer service public IP never responds
Went to the load balancer section in my AWS console
The load balancer is not working because my cluster node is not passing the health checks
SSH'd into my cluster node and found out that the containers do not have any ports associated with them:
This makes the cluster node fail the health checks, so no traffic is forwarded that way.
I tried running a simple nginx container manually (without kubectl), directly on my cluster node:
docker run -p 80:80 nginx
and pasted the node's public IP into my browser. No luck.
Then I tried curling the nginx container directly from the cluster node over SSH:
curl localhost
And I'm getting this response: "curl: (7) Failed to connect to localhost port 80: Connection refused"
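For what it's worth, Kubernetes pods don't publish host ports at all, so docker ps showing no ports is expected; the ELB health check targets the NodePort that kube-proxy opens for the LoadBalancer Service, and the node's security group has to allow that port from the ELB. A sketch of the checks I'd run (the service name is whatever yours is):
# Does the Service have endpoints, i.e. are pods actually backing it?
kubectl get svc -o wide
kubectl get endpoints
kubectl get pods -o wide
# Which NodePort did kube-proxy open? The ELB health-checks this port, so the
# node's security group must allow it from the ELB's security group.
kubectl get svc <your-service> -o jsonpath='{.spec.ports[0].nodePort}'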
Why are containers in the cluster node not showing ports?
How can I make the cluster node pass the load balancer health checks?
Could it have something to do with the fact that I created a single node cluster with eksctl?
What other options do I have to easily run a kubernetes cluster in AWS?
This is somewhere between an answer and a question, but I hope it will help you.
I've been using the Deploying a Kubernetes Cluster with Amazon EKS guide for years whenever I need to create an EKS cluster.
For test purposes, I just spun up a new cluster and it works as expected, including reaching a test application through the external LB IP and passing health checks.
In short you need:
1. Create an EKS role
2. Create a VPC to use with EKS
3. Create a CloudFormation stack from https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-vpc-sample.yaml
4. Export variables to simplify further CLI command usage:
export EKS_CLUSTER_REGION=
export EKS_CLUSTER_NAME=
export EKS_ROLE_ARN=
export EKS_SUBNETS_ID=
export EKS_SECURITY_GROUP_ID=
5. Create the cluster, verify its creation, and generate the appropriate kubeconfig:
#Create
aws eks --region ${EKS_CLUSTER_REGION} create-cluster --name ${EKS_CLUSTER_NAME} --role-arn ${EKS_ROLE_ARN} --resources-vpc-config subnetIds=${EKS_SUBNETS_ID},securityGroupIds=${EKS_SECURITY_GROUP_ID}
#Verify
watch aws eks --region ${EKS_CLUSTER_REGION} describe-cluster --name ${EKS_CLUSTER_NAME} --query cluster.status
#Create .kube/config entry
aws eks --region ${EKS_CLUSTER_REGION} update-kubeconfig --name ${EKS_CLUSTER_NAME}
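Not part of the guide's steps, but a quick smoke test I'd run afterwards to confirm worker nodes and an external load balancer work end to end (assumes worker nodes have been added; the deployment name is arbitrary):
# Worker nodes should show up as Ready
kubectl get nodes
# Deploy a throwaway nginx and expose it through an external load balancer
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=LoadBalancer
kubectl get svc nginx-test    # wait for EXTERNAL-IP, then curl it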
Can you please check the article and confirm you haven't missed any steps during the installation?
I have deployed a Docker image from ECR via an ECS task definition.
The task definition JSON is given below.
I have mapped the container port to 80, and the
network mode is awsvpc.
But when the ECS service starts and Docker runs the container on an EC2 instance, the ports are not mapped. I verified this by logging into the EC2 instance and running
docker ps
I am using a load balancer as of now. I wanted to first get the containers working and accessible
via port 80.
Kindly help me figure out what is wrong in the given config.
With awsvpc, the security group's inbound rules are what matter.
You need to make sure that the container port you mapped is actually allowed in the inbound rules of the security group attached to your ECS service.
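To add a couple of hedged notes to that: with the awsvpc network mode each task gets its own ENI, so docker ps on the host will not show any port mappings even when everything is healthy; traffic goes straight to the task ENI's private IP on the container port. Something along these lines (the cluster name, task ID, CIDR, and security group IDs are placeholders) finds the task's address and opens the port:
# Find the task ENI / private IP
aws ecs describe-tasks --cluster my-cluster --tasks <task-id> \
  --query 'tasks[0].attachments[0].details'
# Allow port 80 on the security group attached to the ECS service
# (use --source-group <alb-sg-id> instead of --cidr once a load balancer is in front)
aws ec2 authorize-security-group-ingress --group-id <service-sg-id> \
  --protocol tcp --port 80 --cidr 10.0.0.0/16
# From inside the VPC the container should then answer on the ENI's private IP
curl http://<task-private-ip>/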
I'm trying to telnet from a docker instance on Elastic Beanstalk to a different EC2 instance within the same VPC. I've created a security group allowing inbound traffic from the Elastic Beanstalk security group id to the other EC2 instance.
After SSH'ing into one of the Elastic Beanstalk instances, I can confirm that I am able to telnet from the Elastic Beanstalk instance to the other EC2 instance.
Successful:
[root@ip-111-11-11-111 ~]# telnet 222.22.22.22 9999
Trying 222.22.22.22...
Connected to 222.22.22.22.
Escape character is '^]'.
But when I connect to the Docker container interactively (via docker exec -it) and try to run the same command, no connection is made:
Failure:
[root@ip-111-11-11-111 ~]# sudo su -
[root@ip-111-11-11-111 ~]# docker exec -it my_instance /bin/sh
/path-of-user # telnet 222.22.22.22 9999
(hangs here, never connects)
So clearly the security group works for the Elastic Beanstalk instance but not for the Docker container running inside it. I'm not sure what changes to the security group would allow traffic from the Docker container inside the Elastic Beanstalk instance to the other EC2 instance. Any help would be greatly appreciated!
If I were you, I'd check Docker's configuration first: if you run sudo docker ps, can you see that your container has port forwarding configured correctly? You should see something like 0.0.0.0:80->80/tcp.
The telnet command inside the Docker container ended up giving a false impression that the connection to the external IP was not working. After further debugging, the connection was actually being made; apparently the Alpine distro I was running in Docker simply does not output anything even though it was indeed connecting. I was able to confirm the connection when I noticed messages successfully passing through my external Kafka setup.
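Since the BusyBox telnet that ships with Alpine prints nothing on a successful connect, a check that produces explicit output may save this kind of debugging (this assumes an Alpine-based image; the IP and port mirror the example above):
# Install a netcat with verbose output and probe the port
apk add --no-cache netcat-openbsd
nc -zv 222.22.22.22 9999    # prints "succeeded!" once the TCP connection opens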