I have an Amazon EC2 instance running Ubuntu Server 16.04.
I want the instance to have two network interfaces. After configuring the secondary interface, I cannot ping my primary interface from my other instances. This is my configuration in /etc/network/interfaces.d/51-secondary.cfg. I have also enabled all traffic to the instance in the security group.
My interfaces are ens5 (primary) and ens6 (secondary). My primary IP is 172.31.0.67 and the secondary is 172.31.6.43. I want my other EC2 instances to be able to communicate with both of my IP addresses. Is this possible, and what did I do wrong here?
auto ens6
iface ens6 inet static
address 172.31.6.43
netmask 255.255.240.0
# Gateway configuration
up ip route add default via 172.31.0.1 dev ens6 table 1000
# Routes and rules
up ip route add 172.31.6.43 dev ens6 table 1000
up ip rule add from 172.31.26.168 lookup 1000
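For comparison, the conventional policy-routing stanza keeps every address in the side table consistent with the interface's own address. A sketch only; the 172.31.0.0/20 subnet and 172.31.0.1 gateway are inferred from the netmask above:

```
auto ens6
iface ens6 inet static
address 172.31.6.43
netmask 255.255.240.0
# Send traffic sourced from the secondary address out via ens6
up ip route add default via 172.31.0.1 dev ens6 table 1000
up ip route add 172.31.0.0/20 dev ens6 src 172.31.6.43 table 1000
up ip rule add from 172.31.6.43 lookup 1000
```

Note that in this sketch the `from` address in the `ip rule` is the interface's own address, whereas the configuration in the question uses a different source (172.31.26.168).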
I am always facing the problem below.
If I create a new RHEL (or any Linux) EC2 instance without modifying the default VPC settings, Network ACL, or route table (the route table is open for outbound 0.0.0.0/0 and connected to the default IGW):
SSH only works from my machine if I select 0.0.0.0/0 in the security group inbound rule.
If I add My IP instead, SSH does not work.
Note:
My current public IP is 103.75.162.205.
The CIDR that AWS provided via My IP [103.75.162.202/32] contains my public IP.
So technically My IP should work, but it does not, or I have never managed to make it work. What am I missing?
I recommend:
Open 0.0.0.0/0
Connect via SSH
Disconnect
Connect again: The instance will show the IP address from which you most recently connected
Use this displayed IP address in the Security Group
Sometimes corporate networks route HTTP traffic differently than SSH traffic due to proxies. The above steps will help you discover the address being used for SSH traffic.
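A quick sanity check of the addresses in the question above can be done with Python's ipaddress module (ip_in_cidr is a helper name invented here):

```python
import ipaddress

def ip_in_cidr(ip: str, cidr: str) -> bool:
    """True if the given address falls inside the CIDR block."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr, strict=False)

# The question's current public IP vs. the /32 that the My IP option produced:
print(ip_in_cidr("103.75.162.205", "103.75.162.202/32"))  # False: a /32 matches one address only
print(ip_in_cidr("103.75.162.205", "103.75.162.0/24"))    # True: the whole /24 covers it
```

The /32 from My IP matches exactly one address, which is consistent with the advice above: the address your SSH traffic actually leaves from may not be the one you expect.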
My public IP is 103.75.162.202.
I cannot use the My IP option in the security group: it always gives a /32,
and I cannot change it to /24 or anything else, so I need to use a custom CIDR of
103.75.162.202/24. I tried 103.75.162.202/31 and it does not work.
Next I realized that the initial IP in a CIDR is used by AWS, hence I changed
my CIDR to 103.75.162.198/24 and now it is working, for both SSH and
Apache httpd.
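A note on the /24 workaround above: it works not because AWS reserves the initial IP of a CIDR, but because every address in the block, once the /24 mask is applied, names the same network. Python's ipaddress module shows this:

```python
import ipaddress

# strict=False masks off the host bits, so both spellings below collapse
# to the same /24 network; which address inside the block you start from
# is irrelevant once the prefix length is applied.
a = ipaddress.ip_network("103.75.162.202/24", strict=False)
b = ipaddress.ip_network("103.75.162.198/24", strict=False)
print(a, b, a == b)  # 103.75.162.0/24 103.75.162.0/24 True
```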
I cannot ping my EC2 instance, which has a public IP associated with it. Before posting here, I read Cannot ping AWS EC2 instance; it didn't help.
Here's how I have things set up:
I created a new Amazon Linux t2.micro instance using all the defaults.
After creation, it didn't have an IPv4 Public IP in the EC2 | INSTANCES | Instances.
So I went to EC2 | NETWORK & SECURITY | Elastic IPs, and clicked the Allocate Elastic IP address button. After the Public IPv4 address column showed an address, I clicked Actions | Associate Elastic IP address.
I went back to EC2 | INSTANCES | Instances, and the IPv4 Public IP column shows the address I just created.
Still cannot ping.
So I went to EC2 | NETWORK & SECURITY | Security Groups, clicked the link for the security group associated with the instance and added an inbound and outbound rule like so:
Type             Protocol  Port Range  Source
All traffic      All       All         0.0.0.0/0
All ICMP - IPv4  ICMP      All         0.0.0.0/0
Still cannot ping.
So I went to VPC | Internet Gateways, clicked the Create internet gateway button, selected the defaults, and then attached the internet gateway to the VPC which is associated with the instance.
Still cannot ping.
So I went to VPC | SECURITY | Network ACLs, Edit Inbound and Edit Outbound rules. This is what I have for both:
Rule # Type Protocol Port Range Source Allow / Deny
100 ALL Traffic ALL ALL 0.0.0.0/0 ALLOW
101 All ICMP - IPv4 ICMP (1) ALL 0.0.0.0/0 ALLOW
Still cannot ping.
What else is missing to be able to ping? Yes, I can ping other hosts on my network... just not to AWS and the public IP address listed for that EC2 instance.
First, it is worth mentioning that there should generally be no need to ever modify the Network ACLs. They can be used for special purposes (e.g. creating a network DMZ), but otherwise just leave them at their default values.
I should also mention that testing with ping generally isn't worthwhile, because ICMP is blocked by many network configurations. Rather than trying to get ping to work, try to get the thing you actually need working: for example, if you wish to SSH into the instance or use it as a web server, test those rather than ping.
Here are the things that would be necessary to get PING to work:
The EC2 instance is launched in a public subnet. This is defined as:
A subnet that has a Route Table entry that directs 0.0.0.0/0 to an Internet Gateway (You did not mention the Route Table in your Question.)
A public IP address associated with the instance (either at launch, or by adding an Elastic IP address afterwards, as you did)
A security group that permits inbound ICMP traffic from your address (or wider, such as 0.0.0.0/0)
An operating system on the instance that is configured to respond to PINGs (this will typically be on by default, but it is the OS that responds to the request)
A network from which you request the Ping that also permits such traffic to flow. (Some corporate networks block such traffic, so you could try it from an alternate network such as home, work or via a tethered phone.)
So, based on the information you have provided, you should confirm that the subnet has a Route Table that points to the Internet Gateway.
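One item in the checklist, the operating system responding to pings, can be verified from inside the instance. On Linux the relevant kernel setting is readable directly (a minimal check):

```python
# "0" (the usual default) means the kernel answers ICMP echo requests;
# "1" means it ignores them, so pings would time out even with all the
# AWS-side settings above correct.
with open("/proc/sys/net/ipv4/icmp_echo_ignore_all") as f:
    value = f.read().strip()
print("kernel answers pings" if value == "0" else "kernel ignores pings")
```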
Go to Network ACLs and add an inbound rule for All ICMP - IPv4, allowing 0.0.0.0/0.
Go to Security Groups and pick the SG you created for your EC2 instance (mine is launch-wizard-1). Add an inbound rule for All ICMP - IPv4, allowing 0.0.0.0/0.
Voilà, I can ping.
Note: I'm using Amazon Linux (free tier t2.micro)
I have an app which is deployed via Docker on one of our legacy servers and want to deploy it on AWS. All instances reside on the company's private network. Private IP addresses:
My local machine: 10.0.2.15
EC2 instance: 10.110.208.142
If I run nmap 10.110.208.142 from within the Docker container, I see port 443 is open as intended. But if I run that command from another computer on the network, e.g. from my local machine, I see that the port is closed.
How do I open that port to the rest of the network? In the EC2 instance, I've tried:
sudo iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT
and it does not resolve the issue. I've also allowed the appropriate inbound connections on port 443 in my AWS security groups:
Thanks,
You cannot reach EC2 instances in your AWS VPC from your network outside of AWS over the public Internet using the instances' private IP addresses. This is why EC2 instances can have two types of IP addresses: public and private.
If you set up a VPN from your corporate network to your VPC, then you will be able to access EC2 instances by their private IP addresses. Your network and the AWS VPC network cannot have overlapping address ranges (at least not without fancier configurations).
You can also assign a public IP address (which can change on stop / restart) or add an Elastic IP address to your EC2 instances and then access them over the public Internet.
In either solution you will also need to configure your security groups to allow access over the desired ports.
Found the issue. I'm using nginx and nginx failed to start, which explains why port 443 appeared to be closed.
In my particular case, nginx failed because I was missing the proper ssl certificate.
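Whether the port looks open from another machine can also be checked without nmap, with a plain TCP connect (port_open is a helper name invented here):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True if something reachable is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From another machine on the network, against the instance's private IP
# (10.110.208.142 in the question):
# port_open("10.110.208.142", 443)
```

Running this before and after restarting nginx makes it easy to confirm whether the service, rather than the network, is the problem.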
We need to run two different applications on the same instance, both on port 80.
Can anyone suggest how to achieve this?
Use your DNS provider to map two host records to the same elastic IP and configure your virtual hosts in Apache to route the traffic based on host name.
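The first option, name-based virtual hosts, looks roughly like this in Apache (hostnames and document roots are placeholders):

```
<VirtualHost *:80>
    ServerName app1.example.com
    DocumentRoot /var/www/app1
</VirtualHost>

<VirtualHost *:80>
    ServerName app2.example.com
    DocumentRoot /var/www/app2
</VirtualHost>
```

Both DNS records point at the same Elastic IP; Apache chooses the virtual host from the Host header of each request.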
Assign a second Elastic Network Interface to your EC2 server. This will give you the capability of having a second public IP address (or Elastic IP) associated with your server. Then configure one application to bind to port 80 on the first ENI, and configure the second application to bind to port 80 on the second ENI.
I have an Amazon AWS EC2 Ubuntu 14.04 server running a Squid proxy.
I have managed to configure the default IP address as a proxy without an issue.
The problem is when I attach additional elastic IP addresses I cannot get them to work.
So I have an EC2 server with 2x network interfaces, both with a private and public IP addresses (one with the default public IP and another with the elastic IP). Both network interfaces are attached to the same security group with my desired proxy ports open.
Within my EC2 instance, I can see eth0 and eth1 by performing an ifconfig.
I cannot even SSH in on the elastic IP.
Within Ubuntu, printing the routing table shows eth0 and eth1 using the same default gateway. I assume this is not correct?
I think I might be missing some routing settings configured in the VPC section.
This is an example of my squid config file.
acl tasty3128 myportname 3128 src 172.X.X.X/24
http_access allow tasty3128
tcp_outgoing_address 67.xxx.108.128 tasty3128
acl tasty3129 myportname 3129 src 172.X.X.X/24
http_access allow tasty3129
tcp_outgoing_address 67.xxx.108.79 tasty3129
Thank you.
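The symptom described, both interfaces sharing one default gateway and no SSH on the elastic IP, usually means replies to traffic arriving on eth1 leave via eth0's default route. A sketch of the common fix, a per-interface route table in /etc/network/interfaces; all addresses are placeholders to adapt to your subnet:

```
auto eth1
iface eth1 inet static
address 172.X.X.43        # eth1's private address (placeholder)
netmask 255.255.240.0
# Replies sourced from eth1's address must go back out via eth1's gateway
up ip route add default via 172.X.X.1 dev eth1 table 200
up ip rule add from 172.X.X.43 lookup 200
```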