Unable to bind docker container to secondary interface - amazon-web-services

I have an EC2 instance (running CentOS 7) with two network interfaces on it. The primary is ens5 and the secondary was attached as eth0. What I'm attempting to do is bind my docker container to eth0, so that both incoming and outgoing traffic is associated with the IP address of eth0.
I have a couple of external ports exposed. The first thing I tried in my docker run command was to just use eth0_ip:port:port. The container did start up successfully, and I was able to hit the container from the host on the IP of eth0; however, when making requests from other EC2 instances in the same VPC, the requests timed out. Using tcpdump I was able to confirm that external requests are reaching the instance, but they aren't making it to the container.
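For reference, a minimal sketch of that publish form, with the eth0 address, port, and image name as placeholders:

docker run -d -p 10.0.1.25:8080:8080 my-image
# confirm the published port is listening on the eth0 address only
sudo ss -tlnp | grep 8080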
I also attempted to create a new network associated with the IP address of eth0, and then set the --network flag in my run command, but I was greeted with the same exact failure.
Any help would be greatly appreciated!

Related

How do I use a cloud-init ubuntu image with Rancher on vSphere, with a network without dhcp?

Long story short: using a network without DHCP to deploy a new cluster from Rancher to vSphere causes a timeout on "waiting for ssh".
I am using a network protocol profile and vApp settings to set the static IP on the nodes.
I followed this guide:
https://www.virtualthoughts.co.uk/2020/03/29/rancher-vsphere-network-protocol-profiles-and-static-ip-addresses-for-k8s-nodes/
But when I disable cloud-init's initial network configuration, the nodes are never assigned the static IP from vApp. Without disabling the initial configuration, the first boot takes around 2 minutes (because it waits for DHCP and fails), but it DOES apply the static IP from vApp afterwards. Unfortunately, those 2 minutes spent waiting for DHCP are enough for Rancher to time out waiting for SSH.
It would appear that my issue was that the network assigned to the nodes was unable to provide IPv6 through DHCP, and this caused netplan not to apply the IPv4 static IPs, without throwing an error.
I found it through:
journalctl -b -u systemd-networkd
After I updated the netplan configuration with:
link-local: [ipv4]
The nodes now get their static IPs correctly
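For context, a minimal sketch of where that setting lives in the netplan configuration; the interface name, addresses, and values below are assumptions for illustration, not taken from the original post:

network:
  version: 2
  ethernets:
    ens192:
      dhcp4: false
      dhcp6: false
      link-local: [ipv4]
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1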

Google Compute Engine instance: all ports are busy

I want to connect my instance with another instance through sockets, and I can choose the port on which my instance builds up the connection. No matter what port I pick between 1024 and 65535, I get the message that the port is busy and that I should choose another port.
Does somebody know what to do?
If I use the internal IP address, it works, but the other instance cannot contact my instance.
First, check whether there's an active firewall on your VM instance.
For Debian/Ubuntu you can run the command:
sudo ufw status
For Centos/Redhat you can run the command:
sudo firewall-cmd --state
Basically, if there are no active firewalls inside your two VM instances and both are in the same VPC, they should be able to connect to each other.
In addition, you can install Nmap to scan the open ports on the other VM instance.
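A minimal sketch of such a scan, assuming a Debian/Ubuntu instance and a placeholder internal IP for the other VM:

sudo apt-get install -y nmap
nmap -p 1024-65535 10.128.0.3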

Access AWS SSM Port inside Docker container

I am trying to access some AWS resources from inside a docker container. I have a port-forwarding SSM session running on the host, and everything works fine when I access the resources via localhost:<port>.
However, inside of a docker container I cannot access these same resources via 172.17.0.1:<port>. Host communication per se seems to work just fine, as I can communicate with a local web server via 172.17.0.1:8000. Only the combination of SSM and docker seems to be a problem.
nmap inside of the container also shows the port as closed.
Is there any way to get the combination of SSM and docker up and running?
I suspect that what is happening is that AWS SSM is port forwarding to localhost and is bound to the loopback adapter.
If I run AWS SSM port forwarding, I can access the port via localhost but not via my machine's IP:port.
So when Docker tries to access the port via its own NATed IP, it is unable to connect.
I have the same issue that I am trying to solve with minikube. Since I am only able to access the ports via localhost on my system, minikube is unable to reach my local ports.
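For reference, a port-forwarding session of this kind is typically started along the lines below (the instance ID and ports are placeholders). One possible workaround, offered as an assumption rather than a confirmed fix, is to run the container with host networking so it shares the host's loopback interface:

aws ssm start-session \
    --target i-0123456789abcdef0 \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["5432"],"localPortNumber":["5432"]}'

# a container on the default bridge cannot reach a listener bound to 127.0.0.1 on the host;
# sharing the host network stack avoids the 172.17.0.1 hop entirely
docker run --rm --network host curlimages/curl http://127.0.0.1:5432/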
If I understand correctly, you can connect to a web server from your container host, but when logged into the docker container itself you cannot reach it?
If this is what you meant, it could be related to the fact that containers have a different network interface from the host and thus different security groups. If the receiving server's security group is configured to allow traffic from the host, but not from the security group of the containers running on the host, that would be a possible explanation for what you experienced.
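If security groups do turn out to be the cause, a rule allowing traffic from one group to the other can be added along these lines (the group IDs and port are placeholders):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaa111bbb222ccc3 \
    --protocol tcp \
    --port 8000 \
    --source-group sg-0ddd444eee555fff6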

Configuring local laptop as puppet server and aws ec2 instance as puppet agent

I am trying to configure Puppet with my local laptop (running Ubuntu 18.04) as the Puppet server and an AWS EC2 instance as the Puppet agent. While doing so I am running into issues around which hostnames to add to the /etc/hosts file, whether to use the public or private IP address, and how to do the final configuration to make this work.
I have put the public IP and public DNS of both systems in the /etc/hosts file, but when running puppet agent --test from the agent I get the errors "temporary failure in name resolution" and "connecting to https://puppet:8140 failed". I am using this for a project and my setup needs to remain like this.
The connection is initiated from the Puppet agent to the PE server, so the agent is going to be looking for your laptop. Even if you have the details of your laptop in the hosts file, it probably has no route back to your laptop across the internet, as the IP of your laptop was probably provided by your router at home.
Why not build your Puppet master on an EC2 instance and keep it all on the same network? Edit code on your laptop, push to GitHub/GitLab, and then deploy the code from there to your PE server using code-manager.
Alternatively you may be able to use a VPN to get your laptop onto the AWS VPC directly in which case it'll appear as just another node on the network and everything should work.
The problem here is that the Puppet server needs a public IP, or an IP in the same network as your EC2 instance, that your Puppet agent can connect to. However, there's one solution that doesn't require a VPN, though it can't be permanent: you can tunnel your local port to the EC2 instance.
ssh -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip
This tunnels port 8140 on your EC2 instance to port 8140 on your localhost.
Then inside your ec2 instance you can modify your /etc/hosts file to add this:
127.0.0.1 puppet
Now run the Puppet agent on your EC2 instance and everything should work as expected. Also note that if you close the SSH connection created above, the tunnel will stop working.
If you want to keep the ssh tunnel open a bit more reliably then this answer might be helpful: https://superuser.com/questions/37738/how-to-reliably-keep-an-ssh-tunnel-open
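One common option in that direction is autossh, which re-establishes the tunnel if it drops; a hedged sketch reusing the placeholders from the command above:

autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -i <pemfile-location> \
    -R 8140:localhost:8140 username@ec2_ip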

Container instance network

I am having trouble connecting one ECS container instance (www, Python) to another container instance (redis).
I am getting a "connecting to 0.0.0.0:6379. Connection refused" error from the www container.
Both instances are running on the same host and were created using two task definitions each containing one docker image.
Both use Bridge networking mode. Each task is executed by means of a service.
I also did setup service discovery for both services.
Things I did and tried:
Assured that Redis is bound to 0.0.0.0 and not 127.0.0.1
Added port mappings for www (80) and redis container (6379)
ssh'ed into the EC2 instance to confirm the port mappings are OK. I can telnet to both port 80 and 6379
Connected to the www instance and tested from the Python console whether 0.0.0.0:6379 was reachable.
It wasn't. I also tried the docker (redis) IP address 172.17.0.3 without luck, and the .local service discovery name of the redis container without luck; the service discovery name did not resolve
Resolving the service discovery name from the EC2 instance (using dig) did work, but returned a 10.0.* address
I am a bit out of options as to why this is the case. Obviously things do work on a local development machine.
Update 10/5: I changed container networking to type "host" which appears to be working. Still not understanding why "bridge" won't work.
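For what it's worth, in bridge mode 0.0.0.0 is not a reachable address from another container; the usual paths are the EC2 instance's private IP plus the mapped host port, or the service discovery name if it resolves to something reachable inside the container. A hedged sketch of checking both paths from inside the www container, assuming redis-cli is available and using placeholder values:

redis-cli -h 10.0.1.117 -p 6379 ping
redis-cli -h redis.local -p 6379 ping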