Expose port of AWS EC2 instance to entire network - amazon-web-services

I have an app which is deployed via Docker on one of our legacy servers and want to deploy it on AWS. All instances reside on the company's private network. Private IP addresses:
My local machine: 10.0.2.15
EC2 instance: 10.110.208.142
If I run nmap 10.110.208.142 from within the Docker container, I see that port 443 is open as intended. But if I run the same command from another computer on the network, e.g. from my local machine, that port shows as closed.
How do I open that port to the rest of the network? In the EC2 instance, I've tried:
sudo iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT
and it does not resolve the issue. I've also allowed the appropriate inbound connections on port 443 in my AWS security groups (screenshot below):
Thanks,

You cannot reach EC2 instances in your AWS VPC from a network outside of AWS by using the instances' private IP addresses over the public Internet. This is why EC2 instances can have two types of IP addresses: public and private.
If you set up a VPN from your corporate network to your VPC, then you will be able to access EC2 instances using their private IP addresses. Your network and the AWS VPC network cannot have overlapping address ranges (at least not without fancier configurations).
You can also assign a public IP address (which can change on stop/restart) or an Elastic IP address to your EC2 instances and then access them over the public Internet.
With either solution, you will also need to configure your security groups to allow access over the desired ports.
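As a rough sketch, associating an Elastic IP and opening port 443 with the AWS CLI might look like the following; the instance, allocation, and security-group IDs are hypothetical placeholders, and 10.0.0.0/8 stands in for the corporate network range:

```shell
# Allocate an Elastic IP in the VPC and note the returned AllocationId
aws ec2 allocate-address --domain vpc

# Associate it with the instance (both IDs below are placeholders)
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0

# Allow inbound 443 from the corporate range in the security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 10.0.0.0/8
```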

Found the issue: I'm using nginx, and nginx had failed to start, which explains why port 443 appeared to be closed.
In my particular case, nginx failed because the proper SSL certificate was missing.
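For future readers hitting the same symptom, a quick way to check whether nginx actually started and bound port 443 (assuming a systemd-based host with nginx installed):

```shell
sudo nginx -t                   # validate the configuration; errors here
                                # (e.g. a missing SSL certificate) prevent startup
sudo systemctl status nginx     # confirm the service is actually running
sudo ss -tlnp | grep ':443'     # show which process, if any, is listening on 443
```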

Related

Accessing docker container in the AWS EC2 public IP

I have an architecture as shown below (diagram omitted; sorry for the mess with all the ports).
So, the EC2 instance is running and I am able to access it via SSH (port 22). I am also able to access the content of the container running on the EC2 instance if I forward the ports via SSH. BUT, I am not able to access this same content if I connect via the public IP of the EC2 instance.
As you can see, the security group is created and the ports are allowed.
When I run sudo firewall-cmd --list-all on the EC2 instance, I can see that the ports 80/tcp, 8080/tcp, 8071/tcp and 8063/tcp are allowed.
I am pretty new to AWS/Docker and cannot figure out how to access the container via the public IP.
I have tried updating the security groups and also allowing the ports on the EC2 instance, thinking that the firewall might be blocking the communication, but access was still not possible.
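One common cause, offered here only as a guess: the container's ports may not be published on all interfaces. A quick check on the instance (the image name below is a hypothetical placeholder):

```shell
# Show how the container's ports are published; you want to see
# 0.0.0.0:80->8080/tcp, not 127.0.0.1:80->8080/tcp
docker ps --format '{{.Names}}\t{{.Ports}}'

# Confirm the host is actually listening on the ports the security group allows
sudo ss -tlnp | grep -E ':(80|8080|8071|8063)'

# If a port was bound to loopback only, publish it on all interfaces instead
docker run -d -p 0.0.0.0:80:8080 myimage   # 'myimage' is a placeholder name
```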

AWS OpenVPN instance can't SSH to other EC2 instances or connect to the web

I set up an OpenVPN EC2 instance on AWS and it has security groups like
I downloaded the client.ovpn file and can successfully connect to it like sudo openvpn --config client.ovpn in Ubuntu (and also via Network Manager after importing the config). All good.
Now I want to make it so my other EC2 instances (that host the actual app) can only be accessed via the VPN, and can't be SSH'd into directly for example. The security group of one of these EC2 instances looks like
where here I'm allowing inbound traffic on port 22 from the Private IPv4 addresses of the OVPN server.
However, if I connect to the VPN and try to SSH to the app EC2 instance, it just times out; nor can I access the web while connected to the VPN.
If I allow SSH on port 22 from 0.0.0.0/0, then I can SSH in with no issues.
Could anyone point me toward what the problem might be?
Could it be because they are on different subnets?
The simple solution: forward all traffic through OpenVPN, restrict access to your instances to OpenVPN's public IP, and connect to your EC2 instances through their public IPs.
The reason your setup did not work, as I understand it:
An AWS VPC is already a kind of VPN.
You are trying to connect to your EC2 instances through their public IPs, which routes through the Internet, so it makes little sense to allow OpenVPN's private IP as a source: traffic arriving at an EC2 instance's public IP carries the OpenVPN server's public IP, so that is the address you should allow.
If you must use OpenVPN and do not want the internal (OpenVPN-to-EC2) connections to traverse the Internet, the EC2 instances must join OpenVPN's private network; there, everyone can talk using private IPs from OpenVPN's range.
Alternatively, extend the AWS VPC with OpenVPN,
or see whether split tunneling works, which "May allow users to access their LAN devices while connected to VPN".
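A related approach worth trying: instead of keying the rule to the OpenVPN server's private IP, reference its security group as the source. A hedged AWS CLI sketch, where both group IDs are hypothetical placeholders:

```shell
# Allow SSH to the app instances only from members of the OpenVPN server's
# security group (both group IDs below are placeholders)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa1111bbbb2222c \
  --protocol tcp --port 22 \
  --source-group sg-0ddd3333eeee4444f
```

Note that group-to-group rules only match traffic that stays on the instances' private addresses inside the VPC, which is exactly why rules keyed to a private source fail when the client arrives via a public IP.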

HashiCorp Vault is not accessible outside of EC2

I have installed HashiCorp Vault on a Linux EC2 machine in AWS. I have unsealed it and allowed all outbound traffic in the Security Group. I am able to access the Vault service within the EC2 instance using "http://localhost:8200", but I am unable to use the service when I try to hit the URL using the public IPv4 address of the EC2 instance from the Internet (e.g. http://xxx.xxx.xxx.xxx:8200).
Check your network configuration.
There are a few things you can check:
Your Security Group allows connections from your IP to port 8200.
Your EC2 instance is in a public subnet.
The NACL of the public subnet allows connections to/from port 8200 and to/from your IP.
The route table of the public subnet has an Internet Gateway attached.
If you validate these 4 points and still can't connect to the service, the problem may be that the service's listen address is 127.0.0.1 (localhost).
https://www.vaultproject.io/docs/commands/server.html#dev-listen-address
In that case, you should start your HashiCorp Vault with the options:
-dev -dev-listen-address="0.0.0.0:8200"
This problem is described here:
Is it possible to start Vault dev server on 0.0.0.0 instead of 127.0.0.1?
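To illustrate the linked fix: the dev server can be bound to all interfaces and then checked from another machine (dev mode only; do not expose an unsealed dev Vault to the Internet, and the public IP below is a placeholder):

```shell
# On the EC2 instance: bind the dev listener to all interfaces
vault server -dev -dev-listen-address="0.0.0.0:8200"

# From another machine: hit the health endpoint through the public IP
curl http://<ec2-public-ipv4>:8200/v1/sys/health
```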

Accessing RDS through bastion host with port forwarding not working

I'm trying to establish a port forwarding to my RDS in a private subnet via a bastion host in a public subnet with the following command:
ssh -A -NL 3007:mydb3.co2qgzotzkku.eu-west-1.rds.amazonaws.com:3306 ubuntu@ec2-562243-250-177.eu-west-1.compute.amazonaws.com
but I can't get a connection to the RDS instance.
The security group for the bastion host allows only SSH on port 22 from my IP,
and the security group for the RDS allows traffic from the bastion host's security group and SSH from my IP.
Besides, the ACLs for the subnets are open to all TCP traffic.
Does anybody have a tip on what is missing to get the tunnel running?
merci A
I think you are missing the ports 3306 and 3307. Allow those ports in both security groups and it will work.
Since you said you are accessing the bastion via a key pair, your new command should be:
ssh -N -L 3007:mydb3.co2qgzotzkku.eu-west-1.rds.amazonaws.com:3306 ubuntu@ec2-562243-250-177.eu-west-1.compute.amazonaws.com -i /path/to/key.pem
I would suggest removing -A from the command, as it enables forwarding of the authentication agent connection. This can also be specified on a per-host basis in a configuration file.
Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent.
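Once the tunnel is up, the database is reachable through the forwarded local port; a sketch of the client side (the username is a placeholder):

```shell
# Connect through the tunnel; use 127.0.0.1, not localhost, so the MySQL
# client uses TCP rather than the local UNIX socket
mysql -h 127.0.0.1 -P 3007 -u dbuser -p
```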

Amazon AWS Multiple Elastic IP Squid Proxy Server

I have an Amazon AWS EC2 Ubuntu 14.0.4 server using squid proxy.
I have managed to configure the default IP address as a proxy without an issue.
The problem is when I attach additional elastic IP addresses I cannot get them to work.
So I have an EC2 server with 2x network interfaces, both with a private and public IP addresses (one with the default public IP and another with the elastic IP). Both network interfaces are attached to the same security group with my desired proxy ports open.
Within my EC2 instance, I can see eth0 and eth1 by performing an ifconfig.
I cannot even SSH in on the elastic IP.
Within Ubuntu, printing the routes shows eth0 and eth1 using the same default gateway. I assume this is not correct?
I think I might be missing some routing settings configured in the VPC section.
This is an example of my squid config file.
# each acl line defines one acl type; split the port match and the source match
acl lan src 172.X.X.X/24
acl tasty3128 localport 3128
http_access allow tasty3128 lan
tcp_outgoing_address 67.xxx.108.128 tasty3128
acl tasty3129 localport 3129
http_access allow tasty3129 lan
tcp_outgoing_address 67.xxx.108.79 tasty3129
thank you
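The symptom described (no SSH even on the elastic IP of the second interface) is typical of asymmetric routing: with two ENIs sharing one default gateway, replies to eth1's address leave via eth0 and get dropped. A hedged sketch of the usual policy-routing fix, where the gateway and private address below are placeholders for eth1's actual values:

```shell
# Create a separate routing table for eth1 (the number and name are arbitrary)
echo "200 eth1table" | sudo tee -a /etc/iproute2/rt_tables

# Default route for that table via eth1's subnet gateway (placeholder address)
sudo ip route add default via 172.31.16.1 dev eth1 table eth1table

# Make replies sourced from eth1's private IP (placeholder) use the eth1 table
sudo ip rule add from 172.31.16.25/32 table eth1table
```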