OpenStack: floating IP pool per tenant in Neutron

On our platform, we have created two Neutron external networks with two different subnets.
Is it possible to make a floating IP pool that is not visible to all tenants?
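
For what it's worth, Neutron's RBAC extension is the usual mechanism for this (if your release supports the access_as_external action): instead of marking the network as globally external, you grant external access per project. A minimal sketch with openstacksdk; the cloud name, network name, and project ID below are hypothetical placeholders:

    import openstack

    # Connect using a clouds.yaml entry; "mycloud" is a hypothetical name.
    conn = openstack.connect(cloud="mycloud")

    # Find the external network that should be visible to only one tenant.
    network = conn.network.find_network("ext-net-1")

    # Grant only the chosen project the right to use this network as an
    # external (floating IP) network. CLI equivalent:
    #   openstack network rbac create --target-project <project> \
    #       --action access_as_external --type network ext-net-1
    conn.network.create_rbac_policy(
        object_type="network",
        object_id=network.id,
        action="access_as_external",
        target_project_id="TENANT_A_PROJECT_ID",
    )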

Related

What is the maximum number of external IPs for one Google Cloud instance

I want to use a Google Cloud instance as a VPN server with multiple external IP addresses.
What is the maximum number of external IPs I can use for one Google Cloud instance? (The documentation says "The maximum number of network interfaces per instance is 8", but I'm not sure whether that means a limit of 8 IPs per instance or 8 subnets, each with many IPs.)
Also, this is probably the dumbest question (I'm totally new to cloud computing), but if, for example, one external IP of the instance is 1.1.1.1, does that mean I can connect to the instance from the internet via this IP, and also that if some software running on the instance connects to another server, that server's log will show the connection coming from 1.1.1.1?
A Compute Engine instance can have multiple network interfaces. Each network interface can have BOTH an internal and an external IP address. This means that with a limit of 8 network interfaces, you can have at most 8 external IP addresses.
(Source: https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address)
It is my understanding that if you have an internal IP address associated with a network interface of a Compute Engine instance, and you then associate that interface with an external IP address, any traffic arriving at the instance through the external IP address will appear (to the Compute Engine) as though it had been sent to the internal IP address.
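
To see this mapping for yourself, you can list an instance's interfaces and their attached external IPs. A minimal sketch with the google-cloud-compute Python client, assuming default credentials are configured; the project, zone, and instance names are hypothetical, and the snake_case attribute names are how (I believe) the v1 client exposes the REST fields networkIP and natIP:

    from google.cloud import compute_v1

    client = compute_v1.InstancesClient()
    instance = client.get(project="my-project", zone="us-central1-a",
                          instance="vpn-server")

    for nic in instance.network_interfaces:
        # network_i_p is the client's name for the REST field networkIP.
        print(f"{nic.name}: internal={nic.network_i_p}")
        for ac in nic.access_configs:
            # nat_i_p is the client's name for the REST field natIP.
            print(f"  external={ac.nat_i_p}")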

Multi Availability Zone VIP in AWS

I am trying to set up a VIP (virtual IP) for a high-availability (HA) Redis setup with HAProxy; however, having a VIP span two HA instances in two different availability zones is proving to be difficult.
I was trying to follow this guide: https://aws.amazon.com/articles/2127188135977316. However, it uses a single availability zone to achieve the floating VIP, which won't work for me because my availability zones are in different subnets.
I have tried the above example with an Elastic IP; however, it does not transfer between machines as expected with the script (http://media.amazonwebservices.com/articles/vip_monitor_files/vip_monitor.sh).
Could someone guide me on how to approach this?
The Leveraging Multiple IP Addresses for Virtual IP Address Fail-over in 6 Simple Steps article you reference is over three years old, so I wouldn't recommend it as a state-of-the-art method of doing failover.
The preferred method for HA is always to load-balance between servers in multiple Availability Zones. Then, if one server or one AZ fails, the other systems can take the full load of traffic (perhaps scaling up to absorb it).
For a requirement where only one server can be active at a time, switching DNS names or Elastic IP addresses would be recommended.
Option 1: Use Route 53 health checks to detect failure, then route the DNS name to an alternate server (this may involve waiting for the TTL to expire on any cached DNS resolutions).
Option 2: Use a static Elastic IP address and reassign it to an alternate server. This involves some method of detecting failure (e.g. the script in that article) followed by an API request to associate the Elastic IP address with another server; see the sketch below.
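
As an illustration of Option 2, the reassignment step with boto3 might look like the following; the allocation and instance IDs are hypothetical, and the failure-detection logic that triggers it is up to you:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def fail_over(allocation_id: str, standby_instance_id: str) -> None:
        """Point the Elastic IP at the standby instance."""
        ec2.associate_address(
            AllocationId=allocation_id,
            InstanceId=standby_instance_id,
            # Move the EIP even if it is currently associated elsewhere.
            AllowReassociation=True,
        )

    # Called by whatever health check detects the primary's failure, e.g.:
    # fail_over("eipalloc-0123456789abcdef0", "i-0123456789abcdef0")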

Single public IP for a service but failover distribution to two internal IPs

Can anyone tell me if it is possible to set up failover in Azure with a single public address for the service?
Meaning: I want the public IP address for a service to always remain the same, but have traffic directed to one of two internal IP addresses depending on the failover or priority status of those two internal IPs. Is this possible? If so, how?
Yes, it is possible: you can front your two internal IP addresses with a load balancer, which will have the public IP.
https://azure.microsoft.com/en-us/documentation/articles/load-balancer-overview/
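
A rough sketch of that setup with the Python azure-mgmt-network SDK (track 2); the subscription ID, resource names, and the pre-created static public IP are all hypothetical, and error handling is omitted:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    SUB = "00000000-0000-0000-0000-000000000000"  # hypothetical subscription
    RG, LB = "my-rg", "my-lb"
    LB_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
             f"/providers/Microsoft.Network/loadBalancers/{LB}")
    PIP_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
              f"/providers/Microsoft.Network/publicIPAddresses/my-pip")

    client = NetworkManagementClient(DefaultAzureCredential(), SUB)

    client.load_balancers.begin_create_or_update(RG, LB, {
        "location": "westeurope",
        "sku": {"name": "Standard"},
        # The pre-created static public IP: this address never changes.
        "frontend_ip_configurations": [
            {"name": "fe", "public_ip_address": {"id": PIP_ID}},
        ],
        # Add both VMs' NICs to this pool.
        "backend_address_pools": [{"name": "bepool"}],
        # The probe removes an unhealthy VM, shifting traffic to the other.
        "probes": [{"name": "hp", "protocol": "Tcp", "port": 80,
                    "interval_in_seconds": 5, "number_of_probes": 2}],
        "load_balancing_rules": [{
            "name": "http", "protocol": "Tcp",
            "frontend_port": 80, "backend_port": 80,
            "frontend_ip_configuration": {"id": f"{LB_ID}/frontendIPConfigurations/fe"},
            "backend_address_pool": {"id": f"{LB_ID}/backendAddressPools/bepool"},
            "probe": {"id": f"{LB_ID}/probes/hp"},
        }],
    }).result()

The health probe is what gives you the failover behaviour: when one backend stops answering, the load balancer stops sending it traffic while the public IP stays the same.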

Do I need an internal ELB if I already have a public ELB pointing to the same location?

Bit of an odd question about AWS + ELBs.
We have a VPC that contains public and private subnets. Within the private subnets, we have two applications (application 1 and application 2) deployed using ASGs, and each is reachable via its own public ELB.
Application 1 also needs to talk to application 2; one is a website and the other is an API service. I was just wondering whether I need to set up an internal ELB for application 2, given that I already have a public ELB for it.
If it makes a difference, all the instances communicate with the outside world using a NAT. Is AWS clever enough to route the traffic internally, or will it go out and back in? If the latter, it definitely feels like I should add an internal ELB.
Cheers
AWS will not do anything in this case to optimize the routing. To do so would require either manipulating the DNS responses to return private addresses or defeating/bypassing your routing table configuration, neither of which would be desirable in many cases. It would also have implications for security groups.
Using the external ELB from inside, the traffic will go out through the NAT instance and hit a public IP of the external load balancer. Additionally, you'll pay for that traffic to leave the network and come back, at $0.01 per gigabyte transferred, billed against each side of the connection (that is, the NAT instance and the ELB are each billed $0.01 for the same gigabyte of data transferred between them, i.e. $0.02/GB) in most configurations.
http://aws.amazon.com/ec2/pricing/
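
If you do add an internal ELB, it mostly comes down to the Scheme flag. A minimal sketch with boto3 and the Classic ELB API; the subnet and security-group IDs are hypothetical:

    import boto3

    elb = boto3.client("elb", region_name="us-east-1")

    elb.create_load_balancer(
        LoadBalancerName="app2-internal",
        Scheme="internal",  # DNS name resolves to private IPs inside the VPC
        Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
        SecurityGroups=["sg-0123456789abcdef0"],
        Listeners=[{
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
        }],
    )

Point application 1 at this load balancer's DNS name and the traffic stays on private addresses, avoiding the NAT hop and the data-transfer charges described above.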

Use one common public IP address for multiple EC2 instances

I'm using AWS EC2 instances as web servers.
There are more than 20 web servers, and they have to connect to some external services. Those external services have IP-based security rules, so I have to use only one or two public IP addresses to connect to them.
How can I route outgoing traffic so it uses only one public IP address?
Yes, you could use a NAT instance for that.
Just make sure your instance is large enough to accommodate the desired throughput.
See: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
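
As a rough boto3 sketch of the wiring (the instance and route-table IDs are hypothetical, and the NAT instance itself must run a NAT-capable AMI):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    NAT_INSTANCE = "i-0123456789abcdef0"   # the NAT instance
    ROUTE_TABLE = "rtb-0123456789abcdef0"  # route table of the web servers' subnet

    # A NAT instance forwards traffic not addressed to itself, so the
    # source/destination check must be disabled.
    ec2.modify_instance_attribute(
        InstanceId=NAT_INSTANCE,
        SourceDestCheck={"Value": False},
    )

    # Send everything non-local from the private subnet through the NAT
    # instance; all web servers then share its public (Elastic) IP.
    ec2.create_route(
        RouteTableId=ROUTE_TABLE,
        DestinationCidrBlock="0.0.0.0/0",
        InstanceId=NAT_INSTANCE,
    )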