Fixed IP address for service behind AWS Application Load Balancer - amazon-web-services

Our company just moved to a new office and got new network equipment in the process. As it turns out, our new firewall cannot push routes over VPN for destinations it would first have to resolve to IP addresses.
As we all know, AWS does not offer static IP addresses for its Application Load Balancer (ALB).
So our idea was to simply put a Network Load Balancer (NLB) in front of the ALB. There is a pretty hacky way described by AWS itself (https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/) that seemed to work fine, even if I don't really like the approach of a Lambda script registering and deregistering targets.
So here is our problem: as it turns out, the ALB only ever sees the NLB's IP address. This prevents us from using security groups for IP whitelisting, which we rely on quite heavily. On top of that, some of our applications (Nginx/PHP based) also do IP address verification, and the ALB used to pass the client's IP address in the X-Forwarded-For header. Now our applications only see the NLB's address.
We know we could use Global Accelerator, but that is a heavy investment, as we don't really need what GA is trying to solve.
So how did you guys solve this problem?
Thankful for any help :)
Greetings

You could get the list of AWS IP addresses for the region your ALB is located in and allow them in your firewall. AWS does publish the list, and you can filter through it: https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
I haven't done this myself, and I'm unsure whether the addresses for ALBs are included under the EC2 category or whether you would have to take the whole AMAZON service range "to be safe".
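For what it's worth, a quick sketch of how that filtering might look in Python (the region and service names are assumptions you would adjust):

```python
# Sketch: pull AWS's published IP ranges and keep the prefixes for one
# region. The region and service names below are assumptions to adjust.
import json
import urllib.request

URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

REGION = "eu-central-1"        # assumed region; use the one your ALB is in
SERVICES = {"EC2", "AMAZON"}   # EC2, or the whole AMAZON range "to be safe"

prefixes = sorted(
    p["ip_prefix"]
    for p in data["prefixes"]
    if p["region"] == REGION and p["service"] in SERVICES
)
for prefix in prefixes:
    print(prefix)
```

Note that the list changes over time, so you would have to re-run this and update the firewall rules periodically.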

Can you expand on this? "We know of the possibility to use the global accelerator but that is a heavy investment as we don't really need what the GA is trying to solve."
GA should give you better, more consistent performance, especially if your office is far away from the AWS Region where the ALB is running.

Related

Static IP to access GCP Machine Learning APIs via gRPC stream over HTTP/2

We're behind a corporate proxy/firewall that can only consume static IP rules, not FQDNs.
For our project, we need to access the Google Speech-to-Text API: https://speech.googleapis.com. From outside the corporate network, we use a gRPC stream over HTTP/2 to do that.
The ideal scenario looks like:
Corporate network -> static IP in GCP -> forwarded gRPC stream to speech.googleapis.com
What we have tried is creating a global static external IP, but we failed when configuring the load balancer, as it can only connect to VMs, not APIs.
Alternatively, we were thinking of using the output of nslookup speech.googleapis.com as IP address ranges and updating them daily, though that seems pretty 'dirty'.
I'm aware we could configure a Compute Engine resource/VM and forward the traffic, but that doesn't seem like an elegant solution either. Preferably, we would achieve this with existing GCP networking components.
Many thanks for any pointers!
Google does not publish a CIDR block for you to use, so you will have daily grief trying to whitelist IP addresses. Most of Google's API services are fronted by the Global Frontend (GFE), which uses HTTP Host headers, not IP addresses, to route traffic, so whitelisting by IP will cause routing to fail.
Trying to look up the IP addresses is also an issue. DNS does not have to return all IP addresses for name resolution in every call, which means a DNS lookup might return one set of addresses now and a different set an hour from now. This is just one example of the grief you will cause yourself by whitelisting IP addresses.
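As a quick illustration of that instability, here is a small sketch that resolves the same hostname twice and compares the answers (over longer intervals the sets drift further apart):

```python
# Sketch: resolve the same API hostname twice and compare the address
# sets. With DNS rotation, the two answers may differ between runs.
import socket
import time

def resolve(host):
    infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

first = resolve("speech.googleapis.com")
time.sleep(5)  # short pause; over hours the drift is larger
second = resolve("speech.googleapis.com")

print("first :", sorted(first))
print("second:", sorted(second))
print("stable:", first == second)
```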
Solution: Talk to your firewall vendor.
Found a solution thanks to clever networking engineers from Google, posting here for future reference:
You can use a CNAME in your internal DNS to point *.googleapis.com to private.googleapis.com. That record in public DNS points to four public IP addresses (199.36.153.8/30) that are not reachable from the public internet, only through a VPN tunnel or Cloud Interconnect.
So if setting up a VPN tunnel to a project in GCP is possible (and it should be quite easy, see https://cloud.google.com/vpn/docs/how-to/creating-static-vpns), then this should solve the problem.
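If you want to sanity-check the override from a client inside the network, a small sketch like this (assuming the internal CNAME is already in place) would confirm that the hostname now resolves into the private range:

```python
# Sketch: verify that an API hostname resolves into the
# private.googleapis.com range (199.36.153.8/30), i.e. that the
# internal DNS override is actually being used.
import ipaddress
import socket

PRIVATE_RANGE = ipaddress.ip_network("199.36.153.8/30")

def resolves_privately(host: str) -> bool:
    infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    v4_addrs = {
        info[4][0] for info in infos
        if ipaddress.ip_address(info[4][0]).version == 4
    }
    return bool(v4_addrs) and all(
        ipaddress.ip_address(a) in PRIVATE_RANGE for a in v4_addrs
    )

print(resolves_privately("speech.googleapis.com"))
```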

How can I forward a regional IP to a global forwarding IP?

Tonight, my client is going to be on a high-profile television show to pitch his business. I created their API and had it running on a small server on Google Cloud Platform with a static IP on the instance since that was all that we needed.
Now I am trying to scale it for the inevitable traffic, so I'm moving to a load balancer and multiple, scalable instances. I thought I could take the IP address from the instance and transfer it to the load balancer. But the load balancer requires a global forwarding IP, while the IP address of the instance is only regional.
For some reason, the mobile developers hardcoded their URLs to the IP address and not the domain name. It's too late in the day for them to resubmit the app code, so I need a way to forward the regional IP to the global forwarding IP that the load balancer takes.
Could I do this through Google Cloud Platform? Or should I set this up through the domain name provider?
I realize that this may break some rules on SO, but I only need the answer for today; the question can come down tomorrow if it does break rules.
Your best shot today may be to increase the memory/CPU of the current machine type and/or use something like Nginx to proxy requests from the instance to the load-balanced fleet.
It is possible to use nginx as a very efficient HTTP load balancer to distribute traffic to several application servers and to improve performance, scalability and reliability of web applications with nginx.
I would do both: increase instance capacity and try an Nginx proxy on that instance. You will still have a single point of failure, but would be able to handle greater capacity.
Essentially this configuration will forward requests from the instance (the regional IP) to your GCP load balancer (the global IP).
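To make the idea concrete, here is a minimal Python sketch of that forwarding (GET only, no error handling; in practice you would use nginx's proxy_pass, and the upstream address is a hypothetical placeholder):

```python
# Minimal reverse-proxy sketch: requests arriving at this instance
# (the regional IP) are forwarded to the load balancer (the global IP).
# GET only and no error handling; just to illustrate the forwarding.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import urllib.request

UPSTREAM = "http://203.0.113.10"  # hypothetical global forwarding IP

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        req = urllib.request.Request(
            UPSTREAM + self.path,
            headers={"Host": self.headers.get("Host", "")},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 80), ProxyHandler).serve_forever()
```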

AWS EC2 - How to upgrade instance without changing existing public IP?

Is it possible to upgrade an EC2 instance without changing its existing public IP address? My mobile application is live, and unfortunately we didn't use an Elastic IP for the web services. So if I upgrade the current instance, it will get a new public IP and existing users won't be able to use the mobile application.
Is there any way to keep the current IP as it is? Or any other way to upgrade without losing existing users? Please suggest.
Consider this a lesson as to why you should use a load balancer and a DNS entry, especially for anything public-facing. What were you going to do if the instance failed? Or the availability zone went down?
Personally, I would spin up a set of new, larger instances behind a load balancer, create a Route53 DNS entry that points to the load balancer, and then release an update to the client that points to the DNS entry. As clients update, traffic will gradually move to the load balancer. The undersized single instance's load will drop, so if it is overloaded it will eventually return to normal. Eventually you can kill the old instance when all or most clients have upgraded.
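If it helps, the Route53 part might look roughly like this with boto3 (all names and zone IDs are hypothetical placeholders):

```python
# Sketch: point a Route53 A record at a load balancer via an alias
# record. All names and zone IDs below are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z111111QQQQQQQ",  # your hosted zone (hypothetical)
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "A",
                "AliasTarget": {
                    # The load balancer's own zone ID and DNS name, as
                    # reported by the ELB console/API (hypothetical).
                    "HostedZoneId": "Z2ABCDEFGHIJKL",
                    "DNSName": "my-lb-1234567890.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```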
It depends on what sort of software you are running, exactly.
If your application is sessionless, then it would be simple to bring up another server on a different IP and use Route53 to switch the traffic over, with both servers running at the same time.
If the application is stateful, though, and stores its sessions locally on the host, then that's more of a problem.
One possible approach is to bind an Elastic IP to the running host, reconfigure your software to listen on all addresses (a lot of configuration controls allow this with an address of 0.0.0.0), then change DNS and gradually watch the traffic migrate to the Elastic IP while both addresses work.
Once the new address is fully in use (this depends on your TTL), it becomes much easier to switch to a new host by reassigning the EIP.
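The EIP mechanics themselves are straightforward, e.g. with boto3 (instance IDs are hypothetical placeholders):

```python
# Sketch: allocate an Elastic IP and bind it to the current host; later
# the same EIP can be re-pointed at the replacement host. The instance
# IDs are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")
alloc = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",  # current host (hypothetical)
    AllocationId=alloc["AllocationId"],
)

# Later, once DNS has migrated, move the EIP to the new host:
# ec2.associate_address(InstanceId="i-0fedcba9876543210",
#                       AllocationId=alloc["AllocationId"],
#                       AllowReassociation=True)
```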

Setting up a load balancer behind a proxy server on Google Cloud Compute Engine

I am looking to build a scalable REST webservice on the Google Cloud Compute Engine but have a couple of requirements that I am not sure how best to implement.
Structure so far:
Two instances running a REST webservice connected to a MySQL cloud database
(number of instances to scale up in the future)
Load balancer to split requests between the two (or more) instances
This part is fine.
What I need next is that the traffic (POST requests from the instances to an external webservice) must come from a single IP address. I assume these requests cannot be routed back out through the public IP of the load balancer?
I get the impression the solution is to route all requests from the instances through a third instance running Squid. Is this the best way to do it? (side question)
Now to my main question:
I have been reading about ApiAxle, which sounds like a nice proxy for web services, offering good access control, throttling, and reporting capabilities.
Can I have an instance running ApiAxle in front of a Google Cloud load balancer, which shares requests from the proxy among the backend instances that do the legwork and feed responses back through the ApiAxle proxy, so that everything goes through a single IP visible to clients using the API? (This would let me add new instances to the pool to add capacity.)
And would the proxy be much of a bottleneck?
Thanks in advance.
/Dave
(I'm new to this, so sorry if it's a stupid question; I can't find anything like this on the web.)
Sounds like you need to NAT your outbound traffic so it appears to come from one IP address. You need to do that via a third instance, since the Google LB stack doesn't provide this; GCLB works only with inbound connections on the load-balanced IP.
You can set up source NAT using advanced routing, or you can use a proxy as you suggested.
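For the source-NAT route, the NAT instance itself boils down to enabling IP forwarding plus a MASQUERADE rule; a rough sketch of driving that from Python (must run as root on Linux, and the interface name is an assumption):

```python
# Sketch: turn a Linux instance into a simple source-NAT gateway so
# outbound traffic from the backends leaves with this instance's IP.
# Must run as root; the interface name eth0 is an assumption.
import subprocess

commands = [
    # Let the kernel forward packets between interfaces.
    ["sysctl", "-w", "net.ipv4.ip_forward=1"],
    # Rewrite the source address of forwarded packets to this host's.
    ["iptables", "-t", "nat", "-A", "POSTROUTING",
     "-o", "eth0", "-j", "MASQUERADE"],
]
for cmd in commands:
    subprocess.run(cmd, check=True)
```

You would also need to point the backends' route for the external service at this instance.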

Possible solution to ELB lacking A record support?

Hey guys, I was wondering if this seems like a viable solution to the age-old problem of Amazon Elastic Load Balancers lacking a dedicated IP, and thus A-record support.
What if I created a micro/small instance and hooked it up to an Elastic IP? I could then use that IP as my A-record address for my website. That instance would forward 100% of its traffic to an ELB address (HAProxy?), which would then operate normally and forward that traffic to my server pool.
With this architecture I can use my A-record and an ELB.
Are there any downsides to this aside from the cost of the initial instance that forwards its traffic to the ELB?
Will this double forwarding create too much lag or is it really negligible since they're all in AWS?
Thanks for feedback.
If you are currently using Route53 for your DNS, it does have support for handling the zone apex.
https://forums.aws.amazon.com/message.jspa?messageID=260459
Not sure if this answers your question since you didn't mention why you need a dedicated ip.
Are there any downsides to this aside from the cost of the initial instance that forwards its traffic to the ELB?
Er, yes. You're losing about 99.9% of the benefits of ELB.
Will this double forwarding create too much lag or is it really negligible since they're all in AWS?
No, the lag should be small (sub-millisecond). The two main problems are:
1) Your instance will become a bottleneck when your traffic increases. You won't be able to survive a sudden rush, such as being linked from a high-traffic website like Slashdot or Oprah.
The whole point of ELB is that they can manage scaling (the frontend and the backend) for you. If you insert a single box in the flow, it kinda prevents ELB from doing anything useful.
Also, a micro instance can take very little traffic. You have to go to at least an m1.large if you don't want your network packets throttled.
2) Your instance will become a single point of failure. When your box dies, your website will be down. ELB can prevent problems on both the frontend and backend with redundancy.
Perhaps if you explained why you needed an A record?
(It is also possible to run your own front-end(s): Just create a box with an EIP, and put nginx and/or HAProxy on it. But as with everything, there are trade-offs.)