Port mapping not added to ECS container

I'm attempting to run a web server on an ECS cluster, using the 'amazon/amazon-ecs-sample' container.
My cluster is backed by an ASG-based capacity provider.
I've attempted to achieve this by using the host network mode on my task and adding a TCP port mapping from host port 80 to container port 80 on the container, like so:
const taskDefinition = new ecs.Ec2TaskDefinition(this, 'TaskDef', {
  networkMode: ecs.NetworkMode.HOST,
});

taskDefinition.addContainer('web', {
  image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
  memoryReservationMiB: 256,
  portMappings: [{
    containerPort: 80,
    hostPort: 80,
    protocol: ecs.Protocol.TCP,
  }],
});
However, when I SSH into my host and run sudo docker ps, the port mapping does not appear:
CONTAINER ID   IMAGE                            COMMAND                  CREATED         STATUS                  PORTS   NAMES
3fd0869a3617   amazon/amazon-ecs-sample         "/usr/sbin/apache2 -…"   9 minutes ago   Up 9 minutes                    ecs-SdInfraStackTaskDef029720EC-4-web-e695b2eaf8acf8c32500
36174f23403b   amazon/amazon-ecs-agent:latest   "/agent"                 18 hours ago    Up 18 hours (healthy)           ecs-agent
What is missing and how can I achieve this?

Related

My Docker container running inside AWS Elastic Beanstalk is not able to connect with the host

My application runs on port 5000 and I have exposed port 5000 in the Dockerfile.
This is my docker-compose.yml file:
"services":
"backend":
"image": "<imageURL>"
"ports":
- "5000:8080"
Container port and application port: 5000
Server port: 8080
The security groups have also been configured properly, and the application is able to connect to the database, but it does not work when I try to ping the IP of the server.
My application has a ping API.
I'm not sure what you are referring to with "EBS" server, as EBS is Elastic Block Store, not a compute service.
If you're using AWS ECS, you need to configure a PortMapping to map external ports to the container ports:
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PortMapping.html
If you're using EC2, make sure that your service is listening on all IPs, using the netstat command:
netstat -anlp | grep [your port]
and that the security group inbound and outbound rules are configured properly.
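Note also that Compose ports entries are host:container. Given that the application listens on port 5000 inside the container, the mapping in the question ("5000:8080") points host port 5000 at container port 8080, where nothing is listening. A minimal sketch of a corrected mapping, assuming the app should be reachable on host port 8080 (keeping the question's image placeholder):
services:
  backend:
    image: "<imageURL>"
    ports:
      - "8080:5000"  # host port 8080 -> container port 5000, where the app listens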

How can I join a Consul agent to a Consul cluster via a Consul server running in a different Fargate task with CloudFormation?

I'm currently working as an intern and have to host a microservice application. I chose to use AWS ECS in combination with Fargate tasks to host a Consul Connect service mesh that provides service discovery, intentions, mTLS and so on for the application. My set-up is as follows:
1 Fargate task (in a service) with a Consul server container.
x Fargate tasks (in a service) with a Consul agent, Consul Connect sidecar and a microservice container.
I am using CloudFormation to deploy that infrastructure automatically.
The problem:
I need to join the Consul agent running in one Fargate task to the Consul cluster started in another Fargate task (the server's task) using CloudFormation. The problem is that I did not find any way to get the IP address needed to join the agent to the cluster.
Does anyone know how I can get the IP address of a Fargate task in CloudFormation or the best practice way of doing this?
If I am not mistaken, you can only join a Consul agent to a Consul cluster using an IP address, a DNS name, or cloud metadata. The first and second I could not retrieve using CloudFormation, and for the third I found that it might not be possible (I could be wrong, but that's what I've read so far).
I also tried both the consul agent -ui [...] -join and -retry-join flags, but neither one worked. I also tried creating an internal load balancer for the task with the Consul server and using its DNS name to try to join the cluster, but that did not work either (I have never set up a load balancer properly on AWS before, so I might have done that wrong). I tried that with the load balancer forwarding traffic to port 8500 (which was the wrong port, I think) and afterwards to port 8301 (which I think was the right port), but I kept getting the message that there was no Consul cluster on that address.
Can anyone tell me how I can proceed?
Thank you in advance!
Thanks to a very smart colleague of mine, I found that putting a load balancer (which I had set up wrong earlier) in front of the ECS service with the Consul server Fargate task solved my problem. The load balancer listener should listen on the Serf LAN port 8301 (TCP_UDP) and forward traffic to the service with that protocol and port.
ConsulServerTargetGroup8301:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    HealthCheckEnabled: true
    HealthCheckIntervalSeconds: 30
    UnhealthyThresholdCount: 3
    HealthyThresholdCount: 3
    Name: ConsulServerTargetGroup8301
    Port: 8301
    Protocol: TCP_UDP
    TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60 # default is 300
    TargetType: ip
    VpcId: !Ref VPC

ConsulServerTargetGroupListener8301:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    DefaultActions:
      - TargetGroupArn: !Ref ConsulServerTargetGroup8301
        Type: forward
    Port: 8301
    Protocol: TCP_UDP
    LoadBalancerArn: !Ref ConsulServerLoadbalancer

ConsulServerLoadbalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Name: ConsulServerLoadbalancer
    IpAddressType: ipv4
    Type: network
    Scheme: internal
    SubnetMappings:
      - SubnetId: !Ref PrivateSubnet1a
      - SubnetId: !Ref PrivateSubnet1b
You can then use the DNS name of the load balancer to join a Consul agent to the Consul cluster using:
Command:
- !Sub >-
consul agent
-ui
-data-dir consul/data
-advertise '{{ GetPrivateInterfaces | include "network" "${pVPCIpRange}" | attr "address" }}'
-client 0.0.0.0
-retry-join ${ConsulServerLoadbalancer.DNSName}
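One more thing worth checking: the security group attached to the Fargate tasks must also allow Serf traffic on port 8301 for both TCP and UDP. A minimal sketch, assuming the Consul tasks share a single security group called ConsulSecurityGroup (that resource name is hypothetical):
ConsulSerfIngressTcp8301:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref ConsulSecurityGroup  # hypothetical SG shared by the Consul tasks
    SourceSecurityGroupId: !Ref ConsulSecurityGroup
    IpProtocol: tcp
    FromPort: 8301
    ToPort: 8301
ConsulSerfIngressUdp8301:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref ConsulSecurityGroup
    SourceSecurityGroupId: !Ref ConsulSecurityGroup
    IpProtocol: udp
    FromPort: 8301
    ToPort: 8301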

Traefik does not want to work on port 80 on AWS

Please help me deal with the accessibility of my simple k8s application via Traefik on AWS.
I tried exposing ports 30000-32767 on the master node in the security group, and the app is accessible from the world; it's just Traefik's port 80 that doesn't want to work. When I exposed port 80 in the master's security group, I got CONNECTION REFUSED when trying to access my app in the browser, and when I deleted the exposed port I got CONNECTION TIMEOUT in the browser. What is the problem? All the k8s services are up and there are no errors in Traefik.
KOPS:
kops create cluster \
  --node-count=2 \
  --networking=calico \
  --node-size=t2.micro \
  --master-size=t2.micro \
  --master-count=1 \
  --zones=us-east-1a \
  --name=${KOPS_CLUSTER_NAME}
K8S app.yml and traefik.yml:
app
https://pastebin.com/WtEe633x
traefik
https://pastebin.com/pnPJVPBP
When I type myapp.com, I want to get the output of the echoserver app on port 80.
You've set things up using a NodePort service:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  # namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort
This doesn't mean that the service proxy will listen on port 80 from the point of view of the outside world. By default, NodePort services allocate their node port at random (from the 30000-32767 range). What you probably want to do is use a LoadBalancer service instead. Check out https://github.com/Ridecell/kubernetes/blob/9e034f4d0fb38e49f808ae0852af74366f630d48/manifests/traefik.yml#L152-L171 for an example; a minimal sketch follows.
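Here is the same Service switched over, keeping everything from the manifest above except the type:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  # namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: LoadBalancer  # on AWS this provisions an ELB forwarding ports 80/8080 to the service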
OMG, the problem was the following: I had an illegal domain name, so I registered a new free legal domain on freenom.com. I set Amazon's NS records in the domain settings, created a hosted zone for the new domain in Route 53 with an alias A record pointing to the load balancer's domain name, and it works! I also changed type: NodePort to type: LoadBalancer in the Traefik service config.

Kubernetes Cluster-IP service not working as expected

OK, so currently I've got a Kubernetes master up and running on an AWS EC2 instance, and a single worker running on my laptop:
$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   34d   v1.9.2
worker   Ready    <none>   20d   v1.9.2
I have created a Deployment using the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
  labels:
    app: hostnames-deployment
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 1
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
        - name: hostnames
          image: k8s.gcr.io/serve_hostname
          ports:
            - containerPort: 9376
              protocol: TCP
The deployment is running:
$ kubectl get deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hostnames   1         1         1            1           1m
A single pod has been created on the worker node:
$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
hostnames-86b6bcdfbc-v8s8l   1/1     Running   0          2m
From the worker node, I can curl the pod and get the information:
$ curl 10.244.8.5:9376
hostnames-86b6bcdfbc-v8s8l
I have created a service using the following configuration:
kind: Service
apiVersion: v1
metadata:
  name: hostnames-service
spec:
  selector:
    app: hostnames
  ports:
    - port: 80
      targetPort: 9376
The service is up and running:
$ kubectl get svc
NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
hostnames-service   ClusterIP   10.97.21.18   <none>        80/TCP    1m
kubernetes          ClusterIP   10.96.0.1     <none>        443/TCP   34d
As I understand it, the service should expose the pod cluster-wide, and I should be able to use the service IP to get the information the pod is serving from any node in the cluster.
If I curl the service from the worker node it works just as expected:
$ curl 10.97.21.18:80
hostnames-86b6bcdfbc-v8s8l
But if I try to curl the service from the master node located on the AWS EC2 instance, the request hangs and eventually times out:
$ curl -v 10.97.21.18:80
* Rebuilt URL to: 10.97.21.18:80/
* Trying 10.97.21.18...
* connect to 10.97.21.18 port 80 failed: Connection timed out
* Failed to connect to 10.97.21.18 port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.97.21.18 port 80: Connection timed out
Why can't the request from the master node reach the pod on the worker node by using the Cluster-IP service?
I have read quite a few articles on Kubernetes networking, as well as the official Kubernetes Services documentation, and couldn't find a solution.
Depending on which mode you are using, the details differ, but conceptually it is the same.
You are trying to connect to two different types of addresses: the pod IP address, which is accessible from the node, and the virtual IP address, which is accessible from pods in the Kubernetes cluster.
The IP address of the service is not the IP address of some pod or anything else; it is a virtual address which is mapped to pod IP addresses based on the rules you define in the service, and it is managed by the kube-proxy daemon, which is part of Kubernetes.
That address is specifically intended for communication inside the cluster, to make it possible to access the pods behind a service without caring about how many replicas of the pod you have or where they are actually running, because the service IP is static, unlike a pod's IP.
So, the service IP address is intended to be reachable from other pods, not from nodes.
You can read in the official documentation about how Service Virtual IPs work.
kube-proxy is responsible for setting up the iptables rules (by default) that route cluster IPs. The Service's cluster IP should be routable from anywhere running kube-proxy. My first guess would be that kube-proxy is not running on the master.
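A quick way to check, assuming kube-proxy is deployed as a DaemonSet in the kube-system namespace (the usual kubeadm layout; names can differ in your cluster):
$ kubectl get pods -n kube-system -o wide | grep kube-proxy
If no kube-proxy pod is listed for the master node, the cluster IP rules were never installed there, which would explain the timeout.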

App running in Docker on EB refuses connecting to self

I have a Play 2 web application, which I deploy to Elastic Beanstalk using Docker. In this web app, I start an Akka cluster. The starting procedure involves adding all nodes in the autoscaling group as seed nodes (including itself). On the first deploy to EB, I specify a VPC to deploy into (I select only one availability zone).
When I run the app and start the cluster, I get the following message:
AssociationError [akka.tcp://cluster#localhost:2551] -> [akka.tcp://cluster#172.31.13.25:2551]: Error [Invalid address: akka.tcp://cluster#172.31.13.25:2551] [
akka.remote.InvalidAssociation: Invalid address: akka.tcp://cluster#172.31.13.25:2551
Caused by: akka.remote.transport.Transport$InvalidAssociationException: Connection refused: /172.31.13.25:2551
Where 172.31.13.25 is the IP of the EC2 instance, and 2551 is the port.
In my Dockerfile I have "EXPOSE 9000 2551". In the EC2 security group I have enabled all inbound traffic from 0.0.0.0/0 (and all outbound traffic). In the VPC network ACLs (and security groups) I've also opened all traffic.
This is my Dockerfile
FROM dockerfile/java:latest
MAINTAINER a <a#b.de>
EXPOSE 9000 2551
ADD files /
WORKDIR /opt/docker
RUN ["chown", "-R", "daemon", "."]
USER daemon
ENTRYPOINT ["bin/myapp"]
CMD []
Why does my EC2 instance refuse a connection to itself on port 2551?
Turns out this is not possible as of now using Docker on Elastic Beanstalk.
It is, however, possible using Tomcat.
Using play/activator, you can deploy a WAR file. By injecting the following .ebextensions config file into the WAR file, I was able to get an extra port open between the EC2 instances:
Resources:
  ExtraPortsSGIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: { "Ref" : "AWSEBSecurityGroup" }
      IpProtocol: "tcp"
      FromPort: "2551"
      ToPort: "2551"
      SourceSecurityGroupId: { "Ref" : "AWSEBSecurityGroup" }