App running in Docker on EB refuses connections to itself

I have a Play 2 web application, which I deploy to Elastic Beanstalk using Docker. In this web app, I start an Akka cluster. The startup procedure involves adding all nodes in the autoscaling group as seed nodes (including the node itself). On the first deploy to EB, I chose to deploy into a VPC (selecting only one availability zone).
When I run the app and start the cluster, I get the following message:
AssociationError [akka.tcp://cluster#localhost:2551] -> [akka.tcp://cluster#172.31.13.25:2551]: Error [Invalid address: akka.tcp://cluster#172.31.13.25:2551] [
akka.remote.InvalidAssociation: Invalid address: akka.tcp://cluster#172.31.13.25:2551
Caused by: akka.remote.transport.Transport$InvalidAssociationException: Connection refused: /172.31.13.25:2551
Where 172.31.13.25 is the IP of the EC2 instance, and 2551 is the port.
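One detail worth noting in the log above: the local address is akka.tcp://cluster#localhost:2551, i.e. the node advertises localhost rather than the instance IP, which a remote peer can never connect back to. Since Akka 2.4, remoting can bind to one address inside the container while advertising another; here is a minimal application.conf sketch, assuming the classic netty.tcp transport (the hard-coded IP stands in for a value you would inject at startup):
akka {
  remote {
    netty.tcp {
      hostname      = "172.31.13.25"  # advertised address: the instance's VPC IP
      port          = 2551
      bind-hostname = "0.0.0.0"       # bind address inside the container
      bind-port     = 2551
    }
  }
}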
In my Dockerfile I have "EXPOSE 9000 2551". In the EC2 security group I have enabled all inbound traffic from
0.0.0.0/0 (and all outbound traffic). In the VPC network ACLs (and security groups) I've also opened all traffic.
This is my Dockerfile:
FROM dockerfile/java:latest
MAINTAINER a <a#b.de>
EXPOSE 9000 2551
ADD files /
WORKDIR /opt/docker
RUN ["chown", "-R", "daemon", "."]
USER daemon
ENTRYPOINT ["bin/myapp"]
CMD []
Why does my EC2 instance refuse a connection to itself on port 2551?

Turns out this is not possible as of now using Docker on Elastic Beanstalk.
It is, however, possible using Tomcat.
Using play/activator, you can deploy a WAR file. By injecting the following .ebextensions config file into the WAR file, I was able to open an extra port between the EC2 instances:
Resources:
  ExtraPortsSGIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: { "Ref" : "AWSEBSecurityGroup" }
      IpProtocol: "tcp"
      FromPort: "2551"
      ToPort: "2551"
      SourceSecurityGroupId: { "Ref" : "AWSEBSecurityGroup" }
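For context: Elastic Beanstalk reads .ebextensions/*.config files from the root of the deployed source bundle, so the WAR would be laid out roughly like this (the file name is arbitrary):
myapp.war
└── .ebextensions/
    └── extra-ports.config   # the Resources snippet above
Note that the rule is deliberately self-referencing: AWSEBSecurityGroup is both the group receiving the ingress rule and the allowed source, which permits exactly the instance-to-instance traffic on 2551 that the Akka cluster needs.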

Related

Why does AWS ECS allow inbound traffic to ALL ports by default?

I am deploying the following relatively simple docker-compose.yml file on AWS ECS via the Docker CLI.
It uses the Tomcat server image, which could be replaced by any other container that does not exit on startup.
services:
  tomcat:
    image: tomcat:9.0
    command: catalina.sh run
    ports:
      - target: 8080
        published: 8080
        x-aws-protocol: http
Commands used
docker context use mycontextforecs
docker compose up
The cluster, services, tasks, target groups, security groups, and application load balancer are created automatically, as expected.
But the security group created by AWS ECS allows inbound traffic on ALL ports by default, instead of only the exposed 8080.
The security group rule carries the description "tomcat:8080/ on default network", yet its port range is "All" instead of 8080.
I've read the following docs and some related Stack Overflow posts, but could not find an answer:
https://docs.docker.com/cloud/ecs-compose-features/
https://docs.docker.com/cloud/ecs-architecture/
https://docs.docker.com/cloud/ecs-integration/
I understand that the default "Fargate" launch type gets a public IP assigned.
But why does ECS allow traffic on all ports?
If I add another service to the docker-compose file, the default security group is shared between both of them.
As a result, anyone can telnet into the ports exposed by either service because of this security group rule.
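No accepted answer is captured here, but the Docker ECS integration docs describe mapping compose networks to security groups, including reusing an existing group instead of the auto-created default. If that mapping works as documented, a sketch along these lines should replace the permissive default (sg-0123456789abcdef0 is a placeholder for a group you have locked down yourself):
# docker-compose.yml sketch: point the default network at a pre-existing,
# locked-down security group instead of the one ECS auto-creates.
services:
  tomcat:
    image: tomcat:9.0
    command: catalina.sh run
    ports:
      - target: 8080
        published: 8080
        x-aws-protocol: http
networks:
  default:
    external: true
    name: sg-0123456789abcdef0   # placeholder: an existing security group ID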

My Docker container running inside AWS Elastic Beanstalk is not able to connect to the host

My application runs on port 5000, and I have exposed port 5000 in the Dockerfile.
This is my docker-compose.yml file:
"services":
"backend":
"image": "<imageURL>"
"ports":
- "5000:8080"
Container port and application port: 5000
Server port: 8080
The security groups have also been configured properly, and the application can connect to the database, but it does not respond when I ping the server's IP.
My application has a ping API.
I'm not sure what you are referring to with "EBS server", as EBS is Elastic Block Storage, not a compute service.
If you're using AWS ECS, you need to configure a "PortMapping" to map host ports to container ports:
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PortMapping.html
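As a sketch, assuming the container really listens on 5000, the relevant task-definition fragment would look something like this (field names per the PortMapping API linked above):
"portMappings": [
  {
    "containerPort": 5000,
    "hostPort": 5000,
    "protocol": "tcp"
  }
]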
If you're using EC2, make sure that your service is listening on all IPs, using the netstat command:
netstat -anlp | grep [your port]
and that your security group's inbound and outbound rules are configured properly.

How can I join a Consul agent to a Consul cluster via a Consul server running in a different Fargate task with CloudFormation?

I'm currently working as an intern and have to host a microservice application. I chose to use AWS ECS in combination with Fargate tasks to host a Consul Connect service mesh that provides service discovery, intentions, mTLS, and so on for the application. My setup is as follows:
1 Fargate task (in a service) with a Consul server container.
x Fargate tasks (in a service) with a Consul agent, Consul Connect sidecar and a microservice container.
I am using CloudFormation to deploy that infrastructure automatically.
The problem:
I need to join the Consul agent running in one Fargate task to the Consul cluster started in another Fargate task (the server's task) using CloudFormation. The problem is that I did not find any way to get the IP address needed to join the agent to the cluster.
Does anyone know how I can get the IP address of a Fargate task in CloudFormation or the best practice way of doing this?
If I am not mistaken, you can only join a Consul agent to a Consul cluster using an IP address, a DNS name, or cloud metadata. The first two I could not retrieve using CloudFormation, and for the third I found that it might not be possible (I could be wrong, but that's what I have read so far).
I also tried both the consul agent -ui [...] -join and -retry-join flags, but neither worked. I also tried creating an internal load balancer for the task with the Consul server and using its DNS name to join the cluster, but that did not work either (I have never set up a load balancer properly on AWS before, so I might have done it wrong). I tried that with the load balancer forwarding traffic to port 8500 (which I think was the wrong port) and afterwards to port 8301 (which I think was the right one), but I kept getting the message that there was no Consul cluster at that address.
Can anyone tell me how I can proceed?
Thank you in advance!
Thanks to a very smart colleague of mine, I found that putting a load balancer (which I had set up incorrectly earlier) in front of the ECS service with the Consul server Fargate task solved my problem. The load balancer listener should listen on the Serf LAN port 8301 (TCP_UDP) and forward traffic to the service with that protocol and port.
ConsulServerTargetGroup8301:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    HealthCheckEnabled: true
    HealthCheckIntervalSeconds: 30
    UnhealthyThresholdCount: 3
    HealthyThresholdCount: 3
    Name: ConsulServerTargetGroup8301
    Port: 8301
    Protocol: TCP_UDP
    TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60 # default is 300
    TargetType: ip
    VpcId: !Ref VPC
ConsulServerTargetGroupListener8301:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    DefaultActions:
      - TargetGroupArn: !Ref ConsulServerTargetGroup8301
        Type: forward
    Port: 8301
    Protocol: TCP_UDP
    LoadBalancerArn: !Ref ConsulServerLoadbalancer
ConsulServerLoadbalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Name: ConsulServerLoadbalancer
    IpAddressType: ipv4
    Type: network
    Scheme: internal
    SubnetMappings:
      - SubnetId: !Ref PrivateSubnet1a
      - SubnetId: !Ref PrivateSubnet1b
You can then use the DNS name of the load balancer to join a Consul agent to the Consul cluster using:
Command:
  - !Sub >-
    consul agent
    -ui
    -data-dir consul/data
    -advertise '{{ GetPrivateInterfaces | include "network" "${pVPCIpRange}" | attr "address" }}'
    -client 0.0.0.0
    -retry-join ${ConsulServerLoadbalancer.DNSName}

Docker container deployed via Beanstalk cannot connect to the database on RDS

I'm new to both Docker and AWS. I just created my very first Docker image. The application is a backend microservice with REST controllers persisting data in a MySQL database. I manually created the database in RDS, and after running the container locally, the REST APIs work fine in Postman.
Here is the Dockerfile:
FROM openjdk:8-jre-alpine
MAINTAINER alireza.online
COPY ./target/Practice-1-1.0-SNAPSHOT.jar /myApplication/
COPY ./target/libs/ /myApplication/libs/
EXPOSE 8080
CMD ["java", "-jar", "./myApplication/Practice-1-1.0-SNAPSHOT.jar"]
Then I deployed the docker image via AWS Beanstalk. Here is the Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "aliam/backend",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ],
  "Logging": "/var/log/nginx"
}
And everything went well.
But now, I'm getting "502 Bad Gateway" in postman when trying to run "backend.us-east-2.elasticbeanstalk.com/health".
I checked the log on Beanstalk and realized that the application has problem connecting to the RDS database:
"Could not create connection to database server. Attempted reconnect 3 times. Giving up."
What I tried in order to solve the problem:
1- I tried assigning the same security group the EC2 instance uses to my RDS instance, but it didn't work.
2- I tried adding more inbound rules to the security group for the public and private IPs of the EC2 instance, but I was not sure which port and CIDR to define, so I couldn't make it work.
Any comment would be highly appreciated.
Here are the resources in your stack:
LoadBalancer -> EC2 instance(s) -> MySQL database
All of them need to have SecurityGroups assigned to them, allowing connections on the right ports to the upstream resources.
So, if you assign security group sg-1234 to your EC2 instances and sg-5678 to your RDS database, there must be a rule in sg-5678 allowing inbound connections from sg-1234 (no need for CIDRs; you can open a connection from SG to SG). The typical MySQL port is 3306.
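As a sketch in CloudFormation terms, mirroring the SecurityGroupIngress example earlier on this page (sg-1234 and sg-5678 are this answer's placeholders):
DbIngressFromAppInstances:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-5678                 # the RDS database's security group
    IpProtocol: tcp
    FromPort: 3306
    ToPort: 3306
    SourceSecurityGroupId: sg-1234   # the EC2 instances' security group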
Similarly, the load balancer (which is automatically created for you by Elastic Beanstalk) must have access to your EC2 instances' port 8080. Furthermore, if you want to reach your instances at "backend.us-east-2.elasticbeanstalk.com/health", the load balancer has to listen on port 80 and have a target group pointing at your instances on port 8080.
Hope this helps!

Kubernetes Cluster on AWS with Kops - NodePort Service Unavailable

I am having difficulties accessing a NodePort service on my Kubernetes cluster.
Goal
set up the ALB ingress controller so that I can use WebSockets and HTTP/2
set up a NodePort service, as required by that controller
Steps taken
Previously, a Kops (version 1.6.2) cluster was created on AWS eu-west-1. The Kops addon for nginx ingress was added, as well as kube-lego. The ELB ingress works fine.
Set up the ALB ingress controller with custom AWS keys, using the IAM profile specified by that project.
Changed the service type from LoadBalancer to NodePort using kubectl replace --force
> kubectl describe svc my-nodeport-service
Name: my-node-port-service
Namespace: default
Labels: <none>
Selector: service=my-selector
Type: NodePort
IP: 100.71.211.249
Port: <unset> 80/TCP
NodePort: <unset> 30176/TCP
Endpoints: 100.96.2.11:3000
Session Affinity: None
Events: <none>
> kubectl describe pods my-nodeport-pod
Name: my-nodeport-pod
Node: <ip>.eu-west-1.compute.internal/<ip>
Labels: service=my-selector
Status: Running
IP: 100.96.2.11
Containers:
update-center:
Port: 3000/TCP
Ready: True
Restart Count: 0
(ssh into node)
$ sudo netstat -nap | grep 30176
tcp6 0 0 :::30176 :::* LISTEN 2093/kube-proxy
Results
Curl from ALB hangs
Curl from <public ip address of all nodes>:<node port for service> hangs
Expected
Curl to both the ALB and directly to node:node-port should return 200 "OK" (the service's HTTP response at the root)
Update:
Issues created on github referencing above with some further details in some cases:
https://github.com/kubernetes/kubernetes/issues/50261
https://github.com/coreos/alb-ingress-controller/issues/169
https://github.com/kubernetes/kops/issues/3146
By default, Kops does not configure the EC2 instances to allow NodePort traffic from outside.
In order for traffic outside of the cluster to reach the NodePort you must edit the configuration for your EC2 instances that are your Kubernetes nodes in the EC2 Console on AWS.
Once in the EC2 console, click "Security Groups." Kops should have named the security groups it created for your cluster nodes.<your cluster name> and master.<your cluster name>
We need to modify these Security Groups to forward traffic from the default port range for NodePorts to the instances.
Click on the security group, open its inbound rules, and add the following rule.
Port range to open on the nodes and master: 30000-32767
This will allow anyone on the internet to access a NodePort on your cluster, so make sure you want these exposed.
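If you prefer the CLI over the console, the equivalent rule can be added with aws ec2 authorize-security-group-ingress (the group ID is a placeholder for your nodes' SG):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 30000-32767 \
  --cidr 0.0.0.0/0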
Alternatively, instead of allowing access from any origin, you can allow it only from the security group created for the ALB by the alb-ingress-controller. However, since that group can be re-created, you will likely have to update the rule whenever the Kubernetes service is modified. I suggest specifying the NodePort explicitly, so it is a predetermined, known NodePort rather than a randomly assigned one; see the sketch below.
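A minimal sketch of pinning the NodePort, reusing the service name, selector, and ports from the question above:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    service: my-selector
  ports:
    - port: 80          # cluster-internal service port
      targetPort: 3000  # the pod's container port
      nodePort: 30176   # pinned; must fall in the default 30000-32767 range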
Opening the NodePort range on the master's SG is not needed to make this work.
Only the workers' SG needs the port range opened.