How to assign multiple public IPs to an AWS EC2 instance?

I have an m4.4xlarge instance to which I initially assigned an Elastic IP. The security group of this instance allows SSH access and also allows access to the web app on port 8000.
Then I clicked on the EC2 instance, chose Actions > Networking > Manage IP Addresses, and assigned a new private IP.
Next I created a new Elastic IP address and mapped it to the newly assigned private IP of the network interface. In the EC2 instance description I can now see both the old and new Elastic IPs in the Elastic IPs field. But the IPv4 Public IP field still shows only the old IP address.
While I am still able to SSH to the instance using the old Elastic IP, I am not able to do so using the new Elastic IP. I am also not able to access the web app on port 8000 using the new Elastic IP. How can I accomplish this?
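(For reference, the console steps above roughly correspond to the following AWS CLI call; the allocation ID, ENI ID, and secondary private IP below are placeholders, not values from my setup:)
#!/bin/bash
# Associate an existing Elastic IP (by allocation ID) with a specific
# secondary private IP on the instance's network interface.
aws ec2 associate-address \
    --allocation-id eipalloc-0def1234567890abc \
    --network-interface-id eni-0abc1234567890def \
    --private-ip-address 172.31.20.15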

Here is the script I wrote to make it work with an additional network interface and to make the change persistent on RHEL/CentOS:
#!/bin/bash
# On AWS, with multiple network cards and the default route tables, outbound public traffic keeps going out via the default interface.
# This can be tested by running tcpdump on the default interface and then sending a ping to the 2nd interface:
# the second address will try to send return traffic via the 1st interface.
# To fix this we need a rule that directs traffic from the second address through the 2nd network interface card.
# We also create a systemd service that recreates the rules and routes on boot, and
# hook it into network.service so the script is also called when the network is started.
# User inputs
INTERFACE1="eth0"
INTERFACE2="eth1"
IP1=10.0.0.70/32
IP2=10.0.5.179/32
ROUTER1=10.0.0.1
ROUTER2=10.0.5.1
# End of user inputs
if [[ $EUID != "0" ]]
then
echo "ERROR. You need root privileges to run this script"
exit 1
fi
# Create the file that will be called by the systemd service
rm -rf /usr/local/src/routes.sh
cat << EOF > /usr/local/src/routes.sh
#!/bin/bash
# Adding the routes for the 2nd network interface to work correctly
ip route flush tab 1 >/dev/null 2>&1
ip route flush tab 2 >/dev/null 2>&1
ip rule del priority 500 >/dev/null 2>&1
ip rule del priority 600 >/dev/null 2>&1
ip route add default via $ROUTER1 dev $INTERFACE1 tab 1
ip route add default via $ROUTER2 dev $INTERFACE2 tab 2
ip rule add from $IP1 tab 1 priority 500
ip rule add from $IP2 tab 2 priority 600
EOF
chmod a+x /usr/local/src/routes.sh
# End of file with new routes and rules
# Create a new systemd service
rm -rf /etc/systemd/system/multiple-nic.service
cat << EOF > /etc/systemd/system/multiple-nic.service
[Unit]
Description=Configure routing for multiple network interface cards
After=network-online.target network.service
[Service]
ExecStart=/usr/local/src/routes.sh
[Install]
WantedBy=network-online.target network.service
EOF
# End of new systemd service
echo "New systemd service - multiple-nic.service created"
systemctl enable multiple-nic.service
systemctl restart network
echo "Network restarted successfully"

You need to configure the second IP address in your OS. For example, on CentOS, if the primary network interface is eth0, you need to add eth0:1 as follows:
sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0:1
DEVICE=eth0:1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=10.0.0.30
PREFIX=24
Then reboot your EC2 instance, e.g. sudo reboot.
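If you would rather not reboot, the alias interface can usually be brought up and checked directly (assuming the file above was saved as ifcfg-eth0:1):
#!/bin/bash
# Bring up the alias interface without a full reboot
sudo ifup eth0:1
# Verify that the secondary address now appears on eth0
ip addr show eth0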

Related

Unable to access EC2 instance

I am using AWS and I created an Auto Scaling launch configuration using a shell script:
#!/bin/sh
curl -L https://us-west-2-aws-training.s3.amazonaws.com/awsu-spl/spl03-working-elb/static/bootstrap-elb.sh | sh
After creating this and the load balancer, two instances were created. I then copied the DNS Name and pasted it in the browser, but it says:
This site can’t be reached
internal-elb-asg-167368762.us-east-1.elb.amazonaws.com took too long to respond.
ERR_CONNECTION_TIMED_OUT
EDIT
I followed your steps and it failed.
You have to change this part of the User Data:
#!/bin/sh curl -L https://us-west-2-aws-training.s3.amazonaws.com/awsu-spl/spl03-working-elb/static/bootstrap-elb.sh | sh
With this:
#!/bin/sh
curl -L https://us-west-2-aws-training.s3.amazonaws.com/awsu-spl/spl03-working-elb/static/bootstrap-elb.sh | sh
Edit: As #john-rotenstein mentioned, it is not necessary to use sudo.
Also, check the following:
Make sure you have the correct security groups on your EC2 instances and on your ELB.
Check that your ELB is listening on port 80.
Port 80 must be open from your ELB security group to your EC2 security group, and port 80 must be open to the world (0.0.0.0/0) in your ELB security group.
Finally, are you sure that you are not using an internal load balancer?
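If you prefer to verify these points from the command line rather than the console, something along these lines should work for a classic ELB (the load balancer name and security group ID are placeholders):
#!/bin/bash
# Show the ELB's listeners, scheme (internet-facing vs internal) and security groups
aws elb describe-load-balancers --load-balancer-names my-elb
# Inspect the security group attached to the EC2 instances
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0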
Hope it helps you.

Nginx doesn't handle the request if I make the request via the AWS Elastic IP

I have an EC2 instance with 2 interfaces, eth0 and eth1, and I assigned 2 Elastic IPs to those interfaces. eth0 works fine: requests made to the IP assigned to it are handled by the server block with the listen 80 default_server; nginx config. In /etc/nginx/sites-available/default I made this config for the second interface, eth1:
server {
listen 172.31.13.104:80;
#listen [::]:80 default_server;
server_name example2.com;
return 301 http://google.com;
}
If I make a request from a second AWS instance to 172.31.13.104, I receive the correct redirect to Google. But when I use the public Elastic IP, the request stays pending. When I run tcptrack on the server on eth1 and make a request from my computer to the Elastic IP, I can see the request on the server, but its state remains SYN_SENT. What should I do to make nginx work correctly?
Edit:
172.31.13.104 is the private IP of eth1
I had the same problem and this is how I solved it. You need two ENIs, each with its own IP; you configure each one for its respective domain, and then you create a config file for each of the two connections.
Below is what I had to do on my rhel server to get it to work.
$ cd /etc/sysconfig/network-scripts
$ sudo cp ifcfg-eth0 ifcfg-eth1
$ sudo vi ifcfg-eth1
-- changed DEVICE="eth0" to DEVICE="eth1" and saved the file
$ sudo vi /etc/rc.local
-- added the following lines and saved the file:
ip route add default via 172.31.48.1 dev eth0 table 20
ip rule add from 172.ip1 table 20
ip route add default via 172.31.48.1 dev eth1 table 21
ip rule add from 172.ip2 table 21
-- please replace 172.31.48.1 with your interface's Gateway (you will get this from 'route -n' output)
-- replace 172.ip1 with eth0's private IP address and 172.ip2 with eth1's private IP address (you will get these from 'ifconfig' output)
$ sudo chmod +x /etc/rc.local
After that, please reboot or stop/start the instance. Once the instance boots up, you will be able to log in using either of the EIPs. Once you are logged in, you can verify whether both interfaces can communicate over the internet by running the following commands:
$ ping -I eth0 google.com (this will ping google.com from interface eth0)
$ ping -I eth1 google.com (this will ping google.com from interface eth1)
You should get a ping response from both.
Once you're through this, you'll need to configure IP-based virtual hosts in Apache [5]. This will let you serve different content from different directories for different domains/sub-domains.
Then, you will need to create resource record sets [6] to route traffic for a subdomain ('poc.domain.com') to an IP address (eth1's EIP in your case).
Finally, you will need to associate/change the security groups [7] of each ENI (eth0 and eth1) as per your requirements.
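As a rough sketch, the per-ENI security groups can also be set from the AWS CLI; the ENI and security group IDs below are placeholders:
#!/bin/bash
# Attach one or more security groups directly to a specific ENI (eth1 here)
aws ec2 modify-network-interface-attribute \
    --network-interface-id eni-0abc1234567890def \
    --groups sg-0123456789abcdef0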
I resolved the problem myself. The rules for my IP routes were incorrect; here is a tutorial on how to configure the routing tables: https://www.lisenet.com/2014/create-and-attach-a-second-elastic-network-interface-with-eip-to-ec2-vpc-instance/

Setting up username/password authentication with EC2 for mongodb on port 27017

I currently have an EC2 instance that I am using to host my MongoDB server on port 27017. Previously I had set up the security group to allow only my home IP address to make a TCP connection to port 27017, but I no longer have a static IP. I now have one that changes every day and that I cannot control. Is there a way to create a mongo URI like mongolabs has,
mongodb://<username>:<password>@<my EC2 IP>:27017/db
that I can use to connect from PyMongo?
There are many, many guides available by searching that describe how to enable MongoDB authentication.
Alternatively, you could create a small script that uses the AWS CLI to update the security group with your current IP address. The script could be run when needed, or set to run automatically when your computer starts or when you log in.
Install the AWS CLI on your machine. You need the proper IAM permissions to update the security group. Then you can use the bash script below to update your security group with your current IP address.
#!/bin/bash
# Look up the current public IP and authorize it in the security group
ip=$(curl -s http://whatismijnip.nl | cut -d " " -f 5)
sleep 5
aws ec2 authorize-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 22 --cidr ${ip}/24
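Note that authorize-security-group-ingress only adds rules, so stale entries from previous IPs will accumulate. A hedged variant that revokes the previously added rule first might look like this (the /tmp/last-cidr state file is an assumption, not part of the original script):
#!/bin/bash
# Sketch: remember the previously authorized CIDR and revoke it before adding the new one.
ip=$(curl -s http://whatismijnip.nl | cut -d " " -f 5)
old_cidr=$(cat /tmp/last-cidr 2>/dev/null)   # assumed location for the last CIDR
if [ -n "$old_cidr" ]; then
    aws ec2 revoke-security-group-ingress --group-name MySecurityGroup \
        --protocol tcp --port 22 --cidr "$old_cidr"
fi
aws ec2 authorize-security-group-ingress --group-name MySecurityGroup \
    --protocol tcp --port 22 --cidr "${ip}/24"
echo "${ip}/24" > /tmp/last-cidr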

Update IP address on start/reboot of the EC2 instance

I have created an AMI image from an existing EC2 instance where I have configured my .NET application, and in the application's web.config file I have used my private/public IP. When I launch a new EC2 instance from the AMI, a new private/public IP is assigned. How can I update the new private/public IP in my web.config files at the start or reboot of my EC2 instance?
You should create a startup script which will change the IP on each instance/AMI boot.
change-ip-on-startup.sh
#!/bin/bash
# Fetch instance IPs from metadata
INSTANCE_PUBLIC_IP=`curl http://169.254.169.254/latest/meta-data/public-ipv4`
INSTANCE_PRIVATE_IP=`curl http://169.254.169.254/latest/meta-data/local-ipv4`
# Use the variables to replace the IP(s)
# sed "s/.../${INSTANCE_PUBLIC_IP}/g" /path/to/web.config
Then do the following to make the script run on each instance/AMI boot:
# Copy the script in the init.d directory and make it executable
cp /home/ec2-user/change-ip-on-startup.sh /etc/init.d/change-ip-on-startup
chmod +x /etc/init.d/change-ip-on-startup
# Load the script on start
ln -s /etc/init.d/change-ip-on-startup /etc/rc3.d/S99change-ip-on-startup
# Emulate a service behaviour
touch /var/lock/subsys/change-ip-on-startup
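On newer instances the metadata service may require IMDSv2; a hedged variant of the metadata lookup that uses a session token would be:
#!/bin/bash
# Fetch a short-lived IMDSv2 token and use it to read the instance IPs
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
INSTANCE_PUBLIC_IP=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/public-ipv4)
INSTANCE_PRIVATE_IP=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/local-ipv4)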

How to launch an amazon ec2 instance inside VPC using Chef?

This is a question primarily about Chef. When looking into controlling nodes inside Amazon VPC with Chef, I run into some difficulties, mainly that a node that does not have an external IP address is not easily reachable by chef.
I went through the basic tutorial for scenario #2 http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html#Case2_Launch_NAT
However, this times out:
knife ec2 server create -N app-server-1 -f m1.small -i rails-quick-start.pem -r "role[base]" -G WebServerSG -S rails-quick-start -x ubuntu -s subnet-580d7e30 -y -I ami-073ae46e -Z us-east-1d
What am I doing wrong?
In order for knife to be able to talk to the server you may need to set up a VPN. If your VPC is already connected to your local network via a VPN then it should work but if not you might want to run an OpenVPN server or something similar.
You can also set up servers in two other ways:
Create an EC2 instance and let it boot up, then run knife bootstrap against it (see the sketch below).
Create an EC2 instance with the proper user data and have cloud-init set it up (if you are running, say, Ubuntu, which includes cloud-init).
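A rough sketch of the first option, assuming the instance is reachable over SSH; the IP address, key file and node name here are placeholders:
#!/bin/bash
# Bootstrap an already-running instance over SSH and assign it a run list
knife bootstrap 10.0.1.25 -x ubuntu -i rails-quick-start.pem --sudo \
    -N app-server-1 -r "role[base]"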
The solution was to set up a tunnel and forward SSH on some port of a publicly visible machine to each of the other machines in the cloud. So my load balancer serves HTTP traffic on port 80, is accessible via port 22, and uses ports 2222, 2223, 2224, ... to tunnel SSH to the non-public cloud instances. On the load balancer (or any public instance) run:
ncat --sh-exec "ncat PRIVATE.SUBNET.IP 22" -l 2222 &
for example:
ncat --sh-exec "ncat 10.0.1.1 22" -l 2222 &
There needs to be a way to associate an Elastic IP to the instance in order to get a public IP for easy access and then do all the bootstrapping and SSH activities through the EIP.