I have an Amazon AWS EC2 Ubuntu 14.04 server running a Squid proxy.
I have managed to configure the default IP address as a proxy without an issue.
The problem is when I attach additional elastic IP addresses I cannot get them to work.
So I have an EC2 server with 2x network interfaces, each with a private and a public IP address (one with the default public IP and the other with the elastic IP). Both network interfaces are attached to the same security group, with my desired proxy ports open.
Within my EC2 instance, I can see eth0 and eth1 by performing an ifconfig.
I cannot even SSH in on the elastic IP.
Within Ubuntu, running route -n shows eth0 and eth1 using the same default gateway. I assume this is not correct?
I think I might be missing some routing settings configured in the VPC section.
This is an example of my squid config file.
acl tasty3128 myportname 3128 src 172.X.X.X/24
http_access allow tasty3128
tcp_outgoing_address 67.xxx.108.128 tasty3128
acl tasty3129 myportname 3129 src 172.X.X.X/24
http_access allow tasty3129
tcp_outgoing_address 67.xxx.108.79 tasty3129
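As a side note, each acl line above mixes two ACL types (myportname and src), which Squid rejects; an acl directive takes a single type. A hedged sketch of the usual multi-outgoing-IP pattern is below, with made-up port names; also note that on EC2 the instance only ever sees its private addresses (elastic IPs are NATed at the gateway), so tcp_outgoing_address generally needs each interface's private IP rather than the public one:

```
# Name each listening port so a myportname ACL can match it
http_port 3128 name=port3128
http_port 3129 name=port3129

acl localnet src 172.X.X.X/24
acl tasty3128 myportname port3128
acl tasty3129 myportname port3129

http_access allow localnet tasty3128
http_access allow localnet tasty3129

# On EC2, use the private IP bound to each interface here,
# not the public/elastic IP (shown as in the question)
tcp_outgoing_address 67.xxx.108.128 tasty3128
tcp_outgoing_address 67.xxx.108.79 tasty3129
```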
Thank you.
I have an Amazon EC2 Instance running Ubuntu server 16.04
I want the EC2 instance to have two network interfaces. After configuring the secondary interface, I cannot ping my primary interface from my other instances. This is my configuration in /etc/network/interfaces.d/51-secondary.cfg. I have also allowed all traffic to the EC2 instance in its security group.
My interfaces are ens5 (primary) and ens6 (secondary). My primary IP is 172.31.0.67 and the secondary is 172.31.6.43. I want my other EC2 instances to communicate with both of my IP addresses. Is this possible, and what did I do wrong here?
auto ens6
iface ens6 inet static
address 172.31.6.43
netmask 255.255.240.0
# Gateway configuration
up ip route add default via 172.31.0.1 dev ens6 table 1000
# Routes and rules
up ip route add 172.31.6.43 dev ens6 table 1000
up ip rule add from 172.31.26.168 lookup 1000
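One likely culprit in the config above: the ip rule matches source 172.31.26.168, which does not appear on either interface, so replies from 172.31.6.43 never hit table 1000. A hedged sketch of the same file with the rule matching the secondary address, assuming 172.31.0.1 is the VPC router and the subnet is 172.31.0.0/20 (from the 255.255.240.0 netmask):

```
auto ens6
iface ens6 inet static
    address 172.31.6.43
    netmask 255.255.240.0
    # Local subnet route in table 1000 so rule-matched traffic
    # can still reach neighbors directly
    up ip route add 172.31.0.0/20 dev ens6 src 172.31.6.43 table 1000
    # Default route for table 1000 via the VPC router, out of ens6
    up ip route add default via 172.31.0.1 dev ens6 table 1000
    # Send traffic sourced from the secondary address to table 1000
    up ip rule add from 172.31.6.43 lookup 1000
```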
I have installed HashiCorp Vault on a Linux EC2 machine in AWS. I have unsealed it and allowed all outbound traffic in the Security Group. I am able to access the Vault service within the EC2 instance using "http://localhost:8200", but I am unable to use the service when I hit the URL using the public IPv4 address of the EC2 instance from the internet (e.g. http://xxx.xxx.xxx.xxx:8200).
Check your network configurations.
There are a few things you can check:
Your Security Group allows connections from your IP to port 8200.
Your EC2 instance is in a public subnet.
The NACL of the public subnet allows connections to/from port 8200 and to/from your IP.
The route table of the public subnet has a route to an Internet Gateway.
If you have validated these 4 points and still can't connect to the service, the problem may be that the service's listen address is 127.0.0.1 (localhost).
https://www.vaultproject.io/docs/commands/server.html#dev-listen-address
In that case, you should start your HashiCorp Vault with the options:
-dev -dev-listen-address="0.0.0.0:8200"
This problem is described here:
Is it possible to start Vault dev server on 0.0.0.0 instead of 127.0.0.1?
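For a non-dev Vault server, the same idea applies via the listener stanza in the server config file instead of command-line flags; a minimal sketch, where the storage path is an assumption and TLS is disabled only for illustration:

```hcl
# vault-config.hcl -- minimal sketch; storage path is an assumption
storage "file" {
  path = "/opt/vault/data"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}
```

Started with: vault server -config=vault-config.hcl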
We have a web-application page exposed at port 9090 on an EC2 instance that lives in the private subnet of our AWS setup.
We have a bastion host that is in the public subnet, and it can talk to the instance in the private subnet. We can also ssh to the instance thru the ssh tunnel of the bastion.
Is there a guide to setting up a proxy on this bastion host to access, in the browser, the page served at http://PrivateSubnetEC2Instance:9090/, by redirecting traffic to/from http://PublicBastion:9090/?
I tried setting up HAProxy (on the bastion), but it doesn't seem to work: there are no errors in the HAProxy logs, but accessing http://PublicBastion:9090 just times out.
Though this is not a complete answer, most likely it is due to one of the following:
Security group rules: did you open port 9090 for everyone in the bastion's security group?
Is your HAProxy listening on 0.0.0.0 and not on 127.0.0.1?
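For reference, a minimal HAProxy sketch for this setup that addresses the second point by binding on 0.0.0.0; the private instance's IP (10.0.1.10) is a placeholder, not taken from the question:

```
# /etc/haproxy/haproxy.cfg -- relevant sections only;
# 10.0.1.10 is a placeholder for the private-subnet instance
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend app_in
    bind 0.0.0.0:9090
    default_backend app_servers

backend app_servers
    server web1 10.0.1.10:9090 check
```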
I have an app which is deployed via Docker on one of our legacy servers and want to deploy it on AWS. All instances reside on the company's private network. Private IP addresses:
My local machine: 10.0.2.15
EC2 instance: 10.110.208.142
If I run nmap 10.110.208.142 from within the Docker container, I see port 443 is open as intended. But if I run that command from another computer on the network, e.g. from my local machine, I see that the port is closed.
How do I open that port to the rest of the network? In the EC2 instance, I've tried:
sudo iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT
and it does not resolve the issue. I've also allowed the appropriate inbound connections on port 443 in my AWS security groups (screenshot below):
Thanks,
You cannot reach EC2 instances in your AWS VPC over the public Internet using their private IP addresses from a network outside of AWS. This is why EC2 instances can have two types of IP addresses: public and private.
If you set up a VPN from your corporate network to your VPC, then you will be able to access EC2 instances using private IP addresses. Your network and the AWS VPC network cannot have overlapping address ranges (at least not without fancier configurations).
You can also assign a public IP address (which can change on stop / restart) or add an Elastic IP address to your EC2 instances and then access them over the public Internet.
In either solution you will also need to configure your security groups to allow access over the desired ports.
Found the issue. I'm using nginx, and nginx failed to start, which explains why port 443 appeared to be closed.
In my particular case, nginx failed because I was missing the proper ssl certificate.
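As a sanity check for this failure mode, nginx -t validates the configuration (including that certificate files exist and are readable) before a restart. A minimal sketch of the kind of SSL server block involved; all paths and the upstream port are assumptions, not taken from the question:

```nginx
# /etc/nginx/conf.d/app.conf -- sketch only; paths are assumptions
server {
    listen 443 ssl;
    server_name _;

    # nginx will refuse to start if these files are missing or unreadable
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        proxy_pass http://127.0.0.1:8080;  # assumed upstream app port
    }
}
```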
I have set up an internet-facing Classic Load Balancer. When I provision an EC2 instance with a public IP address, the load balancer's health check succeeds, but if I provision an identical instance without a public IP address, the health check always fails. Everything else is the same: same subnet, security groups, NACLs, etc.
The health check is TCP 80 ping. I have a web server on all instances and LB is listening on port 80.
Any ideas why it could be failing?
Solved. The instance without a public IP is failing to download and install the web server (httpd), which is why the TCP 80 ping is failing. To reach the internet for the install, the instance needs a NAT gateway or a public IP.
curl -I http://localhost:80 will show you whether your web server is listening on that port.