Container instance network - amazon-web-services

I am having trouble connecting one ECS container (www, Python) to another container (redis).
I am getting a "connecting to 0.0.0.0:6379. Connection refused" error from the www container.
Both containers run on the same host and were created from two task definitions, each containing one Docker image.
Both use bridge networking mode, and each task is run by means of a service.
I also set up service discovery for both services.
Things I did and tried:
Ensured that Redis is bound to 0.0.0.0 and not 127.0.0.1
Added port mappings for the www (80) and redis (6379) containers
ssh'ed into the EC2 instance to verify that the port mappings are OK; I can telnet to both port 80 and 6379
Connected to the www container and used the Python console to test whether 0.0.0.0:6379 was reachable.
It wasn't. I also tried the Docker (redis) IP address 172.17.0.3 without luck, and the .local service discovery name of the redis container; the service discovery name did not resolve.
Resolved the service discovery name from the EC2 instance (using dig): that did work, but returned a 10.0.* address
I am a bit out of options as to why this is the case. Obviously, things do work on a local development machine.
Update 10/5: I changed the container networking to type "host", which appears to work. I still don't understand why "bridge" won't.
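A side note on the symptom itself: 0.0.0.0 only makes sense as a bind address; used as a connect target from the www container, it points back at the www container, where nothing listens on 6379. In bridge mode, the sibling container should be reachable via the instance's private IP (the 10.0.* address the discovery name resolves to) plus the mapped host port. A minimal check, assuming dig and redis-cli are available inside the www container, that redis.local stands in for the actual discovery name, and that the name resolves from inside the container as it did from the instance (if it doesn't, the 10.0.* address can be substituted directly):

    # from a shell inside the www container
    REDIS_HOST=$(dig +short redis.local)      # the instance's 10.0.* address
    redis-cli -h "$REDIS_HOST" -p 6379 ping   # expect: PONG

The Python client would then use that same host:port pair instead of 0.0.0.0:6379.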

Related

Access AWS SSM Port inside Docker container

I am trying to access some AWS resources from inside a Docker container. To that end, I have a port-forwarding SSM session running on the host, and everything works fine when I access the resources via localhost:<port>.
However, from inside a Docker container I cannot access these same resources via 172.17.0.1:<port>. Host communication per se seems to work just fine, as I can communicate with a local web server via 172.17.0.1:8000. Only the combination of SSM and Docker seems to be a problem.
nmap inside the container also shows the port as closed.
Is there any way to get the combination of SSM and Docker up and running?
I suspect that what is happening is that AWS SSM port forwarding binds to the loopback adapter.
If I run AWS SSM port forwarding, I am able to access the port via localhost but not via my machine's IP:port.
So when Docker tries to access the port via its own NATted IP, it is unable to connect.
I have the same issue, which I am trying to solve with minikube. Since I can only access my ports via localhost on my system, minikube is unable to access my local ports.
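If the loopback binding is indeed the problem, one possible workaround (a sketch, not something I have verified) is to relay the loopback-only listener onto the docker0 bridge address with socat, so containers can reach it at 172.17.0.1:

    # on the host, after starting the SSM port-forwarding session;
    # 9999 is a placeholder for the session's local port
    socat TCP-LISTEN:9999,bind=172.17.0.1,fork,reuseaddr TCP:127.0.0.1:9999

Binding to 172.17.0.1 does not conflict with the SSM listener on 127.0.0.1, since they are different addresses on the same port.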
If I understand correctly, you can connect to a web server from your container host, but when logged into the Docker container itself you cannot reach it?
If that is what you meant, it could be related to the fact that containers have a different network interface from the host, and thus different security groups. If the receiving server's security group is configured to allow traffic from the host, but not from the security group of the containers running on the host, that would be a possible explanation for what you experienced.

Call Container running on AWS EC2

I have a Linux EC2 instance with Docker running inside it.
Docker is running 3 services: 2 are workers and one is an API.
I want to be able to call that API from outside the system, but I am getting "This site can't be reached".
The service is running, and the call is a simple ping that works locally through VS, so I don't believe that is the issue.
My security group allows all traffic from 0.0.0.0/0.
I have attempted the following URLs with no luck:
http://ec2-{public-ip}.ap-southeast-2.compute.amazonaws.com/ping
http://{public-ip}/ping
http://{public-ip}/172.17.0.2/ping (the container's IP address)
Based on the comments.
EXPOSE does not actually "expose" a port:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
Thus, to publish port 80 from your container to the instance, you have to use the -p 80:80 option.
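For example (my-api-image is a placeholder; adjust the right-hand port if the API listens on something other than 80 inside the container):

    # publish container port 80 on host port 80
    docker run -d -p 80:80 my-api-image
    # then, from outside the instance:
    curl http://{public-ip}/ping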

Deploying a Go app in AWS ec2 got connection refused

I have a compiled Go project that I want to deploy to an AWS EC2 instance. I simply upload the application and run ./application on the remote server.
In the terminal, the application is running and says it is listening on localhost:3000.
I've already added port 3000 to the security group.
However, when I try to access it in my browser using <public-ip>:3000, it always shows connection refused, whether the application is running or not.
I tried to run the app locally, and it does work.
So is it because I am deploying it incorrectly?
It is a bit difficult to help you because no code was shared.
Some reasons why you get connection refused (a quick way to check the first one is sketched below):
Your application is listening only on localhost:3000
The EC2 security group does not expose port 3000
How to fix:
Most applications define the host address in a config file or environment variables. If you have access to change it, change it from localhost:3000 to 0.0.0.0:3000 to accept connections from all IPs, or to your_ec2_public_ip:3000
If the host address is hardcoded and you have access to the code, change the code as above
If you don't have access to the config or code to change the host address, then add a reverse proxy to route incoming calls to localhost:3000. This is a good link about using Nginx as a reverse proxy: https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
Ensure the EC2 security group allows inbound connections on the designated port, in this case 3000, if you manage to route incoming traffic to your_ip:3000
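To tell the first reason apart from the second, a quick check on the instance (assuming ss is available, as it is on recent Amazon Linux AMIs):

    # on the EC2 instance: see which address the app is bound to
    sudo ss -tlnp | grep 3000
    # 127.0.0.1:3000          -> loopback only; outside connections are refused
    # 0.0.0.0:3000 or *:3000  -> reachable externally, security group permitting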

Problems trying to run a custom SFTP service on port 22 on Amazon ECS

I have built a Node.js app that implements an SFTP server; when files are put to the server, the app parses the data and loads it into a Salesforce instance via API as it arrives. The app runs in a Docker container and listens on port 9001. I'd like it to run on Amazon's EC2 Container Service, listening on the standard port 22. I can run it locally, remapping 9001 to host port 22, and it works fine. But because 22 is also used by SSH, I'm not having any luck running it on ECS. Here are the steps I've taken so far:
Created an EC2 instance using the AMI amzn-ami-2016.03.j-amazon-ecs-optimized (ami-562cf236).
Assigned the instance to a Security Group that allows port 22 (was already present).
Created an ECR registry and pushed my Docker image up to it.
Created an ECS task definition for the image, which contains a port mapping from host port 22 to container port 9001.
Created a service for the task and associated it with the default ECS cluster, which contains my EC2 instance.
At this point, when viewing the "Events" tab of the Service view, I see the following error:
service sfsftp_test_service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance XXXX is already using a port required by your task. For more information, see the Troubleshooting section.
I assumed that this was because my task definition tries to map host port 22, which is reserved for SSH, so I created a new version of the task definition that maps 9001 to 9001. I also updated my security group to allow port 9001 access. This task was started on my instance, and I was able to connect and upload files. So at this point I know that my Node.js app and the Docker image are correct. It's a port mapping issue.
In trying to resolve the port mapping issue, I found this Stack Overflow question about running SSH on an alternate port on EC2 and used the answer there to change my sshd to run on port 9022. I also updated the security group to allow traffic on port 9022. This worked; I can now SSH to port 9022, and I can no longer SSH to port 22.
However, I'm still getting the "closest matching container-instance XXXX is already using a port required by your task" error. I also tried editing the security group, changing the default port 22 grant from "SSH" to "Custom TCP Rule", but that change doesn't stick; I'm also not convinced it's anything but a quick way to pick the right port.
When I view the container instance from the Cluster screen, I can see that 5 ports are "registered", including port 22.
According to this resolved ECS Agent GitHub issue, those ports are "reserved by default" by the ECS agent. I'm guessing this is why ECS refuses to start my Docker image on the EC2 instance. So is this configurable? Can I "unreserve" port 22 to allow my Docker image to run?
Edit to add: After reviewing this ECS Agent documentation, I've opened an issue on the ECS Agent GitHub as well.
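For what it's worth, the agent's reserved ports appear to be configurable through the ECS_RESERVED_PORTS variable in /etc/ecs/ecs.config (the default reserves 22, 2375, 2376, and 51678). A hedged sketch; I haven't confirmed whether the instance also needs to be deregistered and re-registered for the change to take effect:

    # on the container instance: keep the other defaults but stop reserving 22
    echo 'ECS_RESERVED_PORTS=[2375, 2376, 51678]' | sudo tee -a /etc/ecs/ecs.config
    sudo stop ecs && sudo start ecs   # upstart commands on this generation of ECS-optimized AMI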

How to connect to 'real' localhost from inside my VM boot2docker container?

Amazon AWS doesn't allow ElastiCache/Redis instances to be accessed from outside EC2 (outside as in, from my laptop). So for dev purposes, this means my Docker containers need to reference the Redis instance running on my local Mac.
But how do I map the Redis server listening on port 6379 on my localhost into my boot2docker container? I somehow need to tell boot2docker to route some domain like my_real_localhost to 127.0.0.1 outside the VM.
From the point of view of a container running at Amazon (or inside boot2docker), it just needs to know an IP address of your Mac that it can connect to, through any NAT routers and firewalls you are running, on port 6379.
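A sketch of one way to wire that up, assuming the default boot2docker/VirtualBox network, where the Mac is typically reachable from the VM at 192.168.59.3 on the host-only interface (verify the address on your setup), and assuming Redis on the Mac is bound to that interface rather than only to 127.0.0.1:

    # my-image is a placeholder; --add-host gives the container a stable
    # name for the Mac's address
    docker run --add-host=my_real_localhost:192.168.59.3 -it my-image sh
    # then, inside the container:
    #   redis-cli -h my_real_localhost -p 6379 ping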