Cannot access ECS EC2 instance public IP - amazon-web-services

I am connecting two Docker containers, a Python app container and a Redis container, to each other using ECS. When I SSH into the EC2 instance and curl localhost:5000 I get output, which means the containers are running and the Python app is connected to Redis. I am trying to achieve the same result with ECS on EC2.
But when I browse to the public IP of the EC2 instance it does not show anything. My task configuration setting:
In the Python container I am pulling the image from ECR, mapping port 5000:5000, and in the link giving the name of the Redis container so that it can connect to it. What setting am I missing so that I can hit the Python app container without SSHing into the EC2 instance and running curl localhost:5000?

If the application is accessible on localhost, then the possible reasons are:
1. The instance security group does not allow port 5000, so check the security group and allow inbound traffic on port 5000 (see the sketch after this list).
2. The instance might be in a private subnet; but then how do you SSH, from a bastion or directly? If directly, the issue should be resolved by following step 1.
3. Use the public IP that you used for SSH and it should work:
http://ec2Public_Ip:5000
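
A minimal sketch of the rule from step 1 using the AWS CLI; the security group ID below is a hypothetical placeholder:

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5000 \
    --cidr 0.0.0.0/0    # opens the port to the world; narrow the CIDR in production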

Related

Accessing docker container in the AWS EC2 public IP

I have an architecture as shown below (diagram omitted; sorry for the mess with all the ports):
The EC2 instance is running and I am able to access it via SSH (port 22). I am also able to access the contents of the container running in the EC2 instance if I forward the ports via SSH. BUT, I am not able to access the same content if I try to connect via the public IP of the EC2 instance.
As you can see, the security group is created and the ports are allowed.
When I run sudo firewall-cmd --list-all in the EC2 instance, I can see that the ports 80/tcp, 8080/tcp, 8071/tcp, and 8063/tcp are allowed.
I am pretty new to AWS/Docker and cannot figure out how to access the container via the public IP.
I have tried updating the security groups and also allowing the ports in the EC2 instance, thinking that the firewall might be blocking the communication, but access was still not possible.
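
A common first check here (a sketch, assuming 8080 is one of the published ports) is to confirm the container actually publishes its port on all host interfaces rather than only on loopback:

sudo docker ps --format '{{.Names}} {{.Ports}}'    # look for 0.0.0.0:8080->8080/tcp
sudo ss -tlnp | grep 8080                          # bound to 0.0.0.0 or only 127.0.0.1?

A mapping like 127.0.0.1:8080->8080/tcp means the port was bound to loopback only, and no security group or firewalld rule will make it reachable from outside.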

AWS EC2 expose port 5000

I am trying to install KafkaMagic on EC2 to manage our Kafka cluster. I created an EC2 instance in our VPC and added the following inbound rules to the associated security group:
I then made sure the Network ACL had inbound rules to allow traffic.
There I saw that * is a catch-all rule, so rule 100 should override it. I then connected to my EC2 instance using EC2 Instance Connect, downloaded KafkaMagic, and got it running on localhost:5000 of my EC2 instance. Using the public DNS for the EC2 instance, I connected to {publicIp}:5000, where publicIp was copy-pasted. I was unable to connect.
I'm assuming there is a gap in my understanding of what happened. Where did I go wrong along the way setting this up? I'm very new to AWS and I might be missing an important concept.
I needed to run the application on a non-localhost URL. I updated the KafkaMagic URL through this link: https://www.kafkamagic.com/download/#configuration to be on 0.0.0.0:5000, and then I was able to use the public IP associated with my instance to reach the application on port 5000.
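
The difference is visible from the listening socket; a quick sketch of the check (the two commented lines are illustrative output, not literal):

sudo ss -tlnp | grep 5000
# 127.0.0.1:5000 -> reachable only from the instance itself (curl localhost:5000)
# 0.0.0.0:5000   -> reachable on every interface, including via the public IP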

How to browse a webserver running in a docker container when the container itself is running in an AWS EC2 instance?

I have a Django web server running in a Docker container. When the container is running locally, I can see the server by pointing a browser at the localhost port mapped to the exposed port of the container.
Now I have the same container running in an AWS EC2 instance. The container's exposed port has been mapped to a certain port of the AWS instance. How can I browse the running web server from my local machine? (I connect to the AWS EC2 instance using SSH.)
First verify the application is running on EC2 and responding on localhost: SSH in and run curl localhost:PUBLISH_PORT.
If this responds, then run curl http://169.254.169.254/latest/meta-data/public-ipv4; this will return the public IP address of the EC2 instance. Open it in a browser, for example:
54.232.200.77:PUBLISH_PORT
Or you can get the public IP from the EC2 console.
Also, allow PUBLISH_PORT in the instance's security group.
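
For the mapping itself, a minimal sketch (the image name and port 8000 are assumptions) of publishing the container's port on the instance:

docker run -d -p 8000:8000 my-django-image    # host port 8000 -> container port 8000
curl localhost:8000                           # verify from inside the instance first

With that in place, http://EC2_PUBLIC_IP:8000 works once the security group allows inbound TCP on port 8000.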

Cannot telnet from docker container in elastic beanstalk to different ec2 on AWS

I'm trying to telnet from a Docker container on Elastic Beanstalk to a different EC2 instance within the same VPC. I've created a security group on the other EC2 instance allowing inbound traffic from the Elastic Beanstalk security group ID.
After SSHing into one of the Elastic Beanstalk instances, I can confirm that I am able to telnet from the Elastic Beanstalk instance to the other EC2 instance.
Successful:
[root@ip-111-11-11-111 ~]# telnet 222.22.22.22 9999
Trying 222.22.22.22...
Connected to 222.22.22.22.
Escape character is '^]'.
But when I connect to the Docker container interactively (via docker exec -it) and try to run the same command above, no connection is made:
failure:
[root@ip-111-11-11-111 ~]# sudo su -
[root@ip-111-11-11-111 ~]# docker exec -it my_instance /bin/sh
/path-of-user # telnet 222.22.22.22 9999
(hangs here, never connects)
So clearly the security group works for the Elastic Beanstalk instance but not for the Docker container inside it. I'm not sure what changes to the security group would allow traffic from the Docker container inside the Elastic Beanstalk instance to the other EC2 instance. Any help would be greatly appreciated!
If I were you I'd check Docker's configuration, e.g., if you run sudo docker ps, can you see that your container has port forwarding configured correctly? You should see something like 0.0.0.0:80->80/tcp.
The telnet command inside the Docker container ended up being a false positive for the connection to the external IP not working. After further debugging, the connection was actually being made, but the Alpine distro that I was running in Docker simply does not output anything even though it was indeed connecting. I was able to confirm the connection when I noticed messages successfully passing through my external Kafka setup.
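
A less ambiguous check from inside an Alpine container, as a sketch (assumes the container can reach the Alpine package repositories; the IP and port are the ones from the question):

apk add --no-cache netcat-openbsd
nc -zv 222.22.22.22 9999 && echo connected

Unlike the silent telnet here, nc -zv prints an explicit success or failure message and sets its exit code accordingly.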

SSH into EB instance launched in VPC with NAT Gateway

I have launched an Elastic Beanstalk application in a VPC with Amazon RDS (PostgreSQL) using a NAT Gateway (because I want to route my application traffic through a fixed public IP address), following these instructions:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc-rds.html
How can I SSH into the instance from my local system?
eb ssh is showing the following error, although my instance is available and not terminating:
ERROR: This instance does not have a Public IP address. This is possibly because the instance is terminating.
How can I log in with the PostgreSQL client?
The following command is not prompting anything:
psql --host= --port=5432 --username= --password --dbname=ebdb
I know they are in a private subnet so they can't be accessed from the public network, but I want to know whether it is possible at all. Please help!
You will need a server with a public IP (in a public VPC subnet) that you can connect to from outside your VPC. I recommend setting up a t2.nano instance as a bastion host.
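
A minimal sketch of the bastion setup; the host names, IPs, and key paths below are assumptions (ProxyJump needs a reasonably recent OpenSSH):

# ~/.ssh/config
Host bastion
    HostName BASTION_PUBLIC_IP
    User ec2-user
    IdentityFile ~/.ssh/bastion-key.pem
Host eb-instance
    HostName INSTANCE_PRIVATE_IP
    User ec2-user
    ProxyJump bastion
    IdentityFile ~/.ssh/eb-key.pem

Then ssh eb-instance hops through the bastion, and the psql question can be handled with a tunnel through the same bastion:

ssh -L 5432:RDS_ENDPOINT:5432 bastion
psql --host=localhost --port=5432 --username=YOUR_USER --password --dbname=ebdb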
If you use a VPN, you can also modify sshops.py to use the private DNS name. The location varies by OS and version, but mine is located here:
~/Library/Python/2.7/lib/python/site-packages/ebcli/operations/sshops.py
Search for PublicIpAddress (mine is on line 88) and change it to read:
ip = instance['PrivateDnsName']  # was PublicIpAddress
It's too bad that the EB CLI isn't on Github...otherwise I'd contribute a way to do this via a parameter.
I also added a convenient alias for this:
alias appname='eb init appname;eb ssh --region=us-east-1 appname -n'
This allows running appname 1 or appname n, where n selects the nth instance in your cluster.
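
For example, assuming the alias above and an environment with at least two instances (the trailing number is appended after -n):

appname 2    # expands to: eb init appname;eb ssh --region=us-east-1 appname -n 2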