dynamic DNS on Google Cloud? - google-cloud-platform

I have a bunch of servers that I want to start on Google Cloud. I have one static IP that I have reserved, that's my "public" entry point to my system.
But I also need to be able to get to all my other servers directly. I don't really care about what ephemeral IP is assigned to them, but it would be very convenient to be able to refer to them by name (rather than having to copy-and-paste the IP addresses from the console).
I see this answer, but I was hoping that there is a configuration option somewhere for this that does not involve scripting.

The link you have provided is a comprehensive answer. You can do it in several different ways (for example with Deployment Manager or Cloud Functions), but at the end of the day it's still scripting.
However, if your issue is the changing IPs, you can reserve the IPs and reattach them if the instance gets recreated. (You only need to pay for reserved IPs while they are unattached.)
That said, it's questionable why you would need frequent direct access to all of your instances, rather than reaching them through a public/internal endpoint such as a load balancer.
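For completeness, here is roughly what that scripting looks like. This is a minimal sketch, assuming a project named my-project, a Compute Engine zone us-central1-a, a Cloud DNS managed zone my-zone for a domain like example.internal., and application default credentials; all of those names are placeholders, not anything from the question:

```python
# Hypothetical sketch: sync ephemeral instance IPs into a Cloud DNS zone.
from googleapiclient.discovery import build  # pip install google-api-python-client
from google.cloud import dns                 # pip install google-cloud-dns

PROJECT, COMPUTE_ZONE, DNS_ZONE = "my-project", "us-central1-a", "my-zone"

compute = build("compute", "v1")
instances = compute.instances().list(project=PROJECT, zone=COMPUTE_ZONE).execute()

client = dns.Client(project=PROJECT)
zone = client.zone(DNS_ZONE)
changes = zone.changes()

for inst in instances.get("items", []):
    access = inst["networkInterfaces"][0].get("accessConfigs", [{}])[0]
    ip = access.get("natIP")  # the instance's ephemeral external IP, if any
    if ip:
        fqdn = f"{inst['name']}.example.internal."
        changes.add_record_set(zone.resource_record_set(fqdn, "A", 300, [ip]))
        # NOTE: replacing an existing record requires deleting the old
        # record set in the same change; omitted here for brevity.

changes.create()  # submit the change set to Cloud DNS
```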

How to see which IP address / domain our AWS Lambda requests are being sent from?

We're using Lambda to submit API requests to various endpoints. Lately we have been getting 403-Forbidden replies from the API endpoint(s) we're using, but it's only happening randomly.
When it pops up it seems to happen for a couple of days and then stops for a while, but happens again later.
In order to troubleshoot this, the API provider(s) are asking me what IP address / domain we are sending requests from so that they can check their firewall.
I cannot find any report or anything showing me this, which seems unbelievable to me. I do see other threads about setting up VPC with private subnet, which would then use a static IP for all Lambda requests.
We can do that, but is there really no report or log that would show me a list of all the requests we've made and the IP/domain they came from in the current setup?
Any information on this would be greatly appreciated. Thanks!
"I cannot find any report or anything showing me this, which seems unbelievable to me"
Lambda exists to let you write functions without thinking about the infrastructure they're deployed on. It seems completely reasonable to me that it doesn't give you visibility into its public IP. It may not have one.
AWS has the concept of an elastic network interface. This is an entity in the AWS software-defined network that is independent of both the physical hardware running your workload, as well as any potential public IP addresses. For example, in EC2 an ENI is associated with an instance even when it's stopped, and even though it may run on different physical hardware and get a different public IP when it's next started (I've linked to the EC2 docs because that's the best description that I know of, but the same idea applies to Lambda, ECS, and anything else on the AWS network).
If you absolutely need to know what address a particular non-VPC Lambda invocation is using, then I think your only option is to call one of the "what's my IP" APIs. However, there is no guarantee that you'll ever see the same IP address associated with one of your Lambdas in the future.
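If you go that route, here is a minimal sketch of such a handler, using AWS's own IP-echo endpoint (checkip.amazonaws.com); the handler name and return shape are illustrative:

```python
# Sketch: a Lambda handler that logs the function's apparent public IP.
import urllib.request

def handler(event, context):
    # The address returned is whatever NAT/egress IP AWS happened to use
    # for this invocation; there is no guarantee it stays stable.
    with urllib.request.urlopen("https://checkip.amazonaws.com", timeout=5) as resp:
        ip = resp.read().decode().strip()
    print(f"outbound requests currently appear to come from {ip}")
    return {"egress_ip": ip}
```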
As people have noted in the comments, the best solution is to run your Lambdas in a private subnet in your VPC, with a NAT and Elastic IP to guarantee that they always appear to be using the same public IP.

Switching between on-prem and cloud server IPs without load balancing

I own a something.com domain and want to switch between an old on-premises server to a new Google Cloud VM. I can do that by changing the A record under DNS settings. If the new server fails I need to be able to switch back to the old server.
The problem with using A records is that DNS changes don't propagate quickly even if you use Cloudflare. Google Chrome in particular sticks to its cached DNS entries like crazy: if it first learned that something.com resolves to X.X.X.X, it will not let go of it.
I need to be able to direct all traffic going to the Google Cloud static IP back to the old server's IP. I'm looking to find a proxy/routing rule menu that I can use to apply - not a full blown load-balancing menu that will cost extra per month.
The solution is to get rid of the old server and build a more robust solution on GCP. There are multiple ways to do this, but one obvious way is to use a Managed Instance Group (https://cloud.google.com/compute/docs/instance-groups). MIGs can be configured to be autohealing (https://cloud.google.com/compute/docs/tutorials/high-availability-autohealing) and autoscaling (if needed).
In this case you should probably look at stateful MIGs in particular (https://cloud.google.com/compute/docs/instance-groups/stateful-migs).
You have two options for switching your DNS from one IP to another dynamically:
Either you use a DNS failover service, which GCP doesn't offer today. Use a low TTL in your DNS definition, or you will wait a long time before the automatic switch takes effect.
Or you implement it yourself with a proxy server that you have to manage.
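As an illustration of that second option, here is a minimal sketch of such a proxy: the static GCP IP stays in DNS, and this process relays raw TCP to whichever backend is currently marked active. The addresses are placeholders, and a real deployment would need health checks, timeouts, and TLS handling:

```python
# Sketch: a manually managed failover proxy relaying TCP to the active backend.
import socket
import threading

BACKENDS = {"new": ("10.0.0.2", 80), "old": ("203.0.113.7", 80)}  # placeholders
ACTIVE = "old"  # flip to "new" once the cloud server is healthy again

def pipe(src, dst):
    # Copy bytes one way until either side closes.
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def serve(listen_port=8080):  # port 80 would require elevated privileges
    srv = socket.create_server(("", listen_port))
    while True:
        client, _ = srv.accept()
        backend = socket.create_connection(BACKENDS[ACTIVE])
        # Relay bytes in both directions on separate threads.
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

if __name__ == "__main__":
    serve()
```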

How to find EC2 availability zone that's closest to a specific website

I want to find the EC2 availability zone (e.g. eu-central-1) that's closest to a single specific website (e.g., www.stackoverflow.com).
In theory I can achieve this by manually starting up an instance in each availability zone and pinging the website from an SSH session. However this is too cumbersome and expensive. Is there an automated way to do this?
I know of services [1][2] that allow you to do this when using your own IP address, but none that allows the use of a specific other website (e.g. www.stackoverflow.com).
[1] https://cloudharmony.com/speedtest#for-aws
[2] https://ping.psa.fun/
Can you be specific about what the exact use case is?
Btw, the region/AZ naming mapping is in the doc below. Note that eu-central-1 is a region; the AZs within a region should give you roughly the same latency and availability, so what you're really choosing is the closest region.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
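One way to automate the ping approach without SSH'ing into each machine is Systems Manager Run Command. This is a hedged sketch using boto3, assuming you already have one SSM-managed instance per region of interest; the instance IDs below are placeholders:

```python
# Sketch: run ping from an instance in each region and compare latencies.
import time
import boto3

TARGET = "www.stackoverflow.com"
INSTANCES = {"eu-central-1": "i-0123456789abcdef0",   # placeholder IDs
             "us-east-1":    "i-0fedcba9876543210"}

for region, instance_id in INSTANCES.items():
    ssm = boto3.client("ssm", region_name=region)
    cmd = ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": [f"ping -c 5 {TARGET}"]},
    )
    command_id = cmd["Command"]["CommandId"]
    time.sleep(10)  # crude wait; poll the invocation status properly in real code
    out = ssm.get_command_invocation(CommandId=command_id, InstanceId=instance_id)
    print(region, out["StandardOutputContent"].splitlines()[-1])  # rtt summary line
```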

Does my Cloud Function "own" its IP address while it is running?

Can I assume that while my cloud function is running, no other cloud function (that is also currently running) also has the same IP address? In other words, do I "own" the IP address of the cloud function during the time in which it is running?
My guess is no, since it would just cost Google more money to do that without much benefit for 95% of users, but I couldn't find any info on this anywhere, hence this question.
If my intuition is correct, then perhaps the only way to be sure that my function has a unique IP is to assign it a static IP? As of writing, static IPs for Cloud Functions are apparently in beta.
Currently, as the product stands, you cannot assume that an outgoing request from Cloud Functions will appear to come from an IP address that no other function's outgoing traffic also appears to come from. As you've seen in the other question, there are blocks of addresses that Google owns, and the traffic could appear to come from anywhere within those blocks, depending on the region of deployment and other factors. You can expect that there are far more Cloud Functions deployed across all projects for all customers running concurrently than there are specific IPs within those blocks. So you should not make any assumptions about the IP of origination: it could change at any time, and any function's or project's traffic may appear to come from it.
If this situation changes due to additional features offered by Cloud Functions, you might get a different set of guarantees, but it's not clear what those are without being in this beta program.
Doug is right. There is no guarantee of the IP address. And I haven't heard about any alpha/beta program with a static public IP.
However, there is a beta program called VPC connector, in the networking section of the console, which allows you to define a small range of IPs (a /28 CIDR) to be used by functions to enter your project's VPC. You can then set up all the routes and firewall rules that you want for this range in your VPC.
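For illustration, such a connector can also be created programmatically. This is a hedged sketch using the vpcaccess API via google-api-python-client; the project, region, connector name, and CIDR are placeholders, so check the current API surface before relying on this:

```python
# Sketch: create a Serverless VPC Access connector programmatically.
from googleapiclient.discovery import build

vpcaccess = build("vpcaccess", "v1")
op = vpcaccess.projects().locations().connectors().create(
    parent="projects/my-project/locations/us-central1",  # placeholder
    connectorId="my-connector",                          # placeholder
    body={
        "network": "default",           # VPC network the functions should reach
        "ipCidrRange": "10.8.0.0/28",   # the small /28 range mentioned above
    },
).execute()
print(op["name"])  # a long-running operation to poll for completion
```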
Finally, about the early access program mentioned in the link (which shouldn't be public), it's not exactly that. Stay tuned.

Having specific website access on specific EC2 instance under ELB

I wanted to know if there is an option in Amazon Web Services to have two EC2 instances running, and me, as a developer, being able to have direct access to one of my choice when both servers serve under the same domain.
By access, I mean regular access to the website via a web browser (e.g. www.domain.com/some-post/)
I want my site to continue to be up and live. I currently have a single EC2 server that serves under www.domain.com. If I add another server via Elastic Load Balancer, I don't have control over which server the load balancer sends me to.
I have a WordPress site whose theme, plugins, and core files I want to upgrade, so I want only me to have access to that server to test it out. I could spin up a server and test it on a public IP (I did, and it doesn't work as expected), so I need to run it under the original address to make sure that if it runs OK like that, it will run OK live.
The only way I have thought of doing it is to create an image of the server, create an EC2 instance from it, use a different domain name, restrict access to the server to my IP address, change the domain name in the DB, and then, after everything works, change the domain back to the original and make the Elastic IP point to the new server.
No, you can't achieve this behavior with an ELB. It would totally defeat the purpose of an ELB, whose purpose is to evenly distribute traffic amongst the instances associated with it.
By the sounds of it, you're looking for a testing stage that you can use to test out new updates etc without damaging the live site.
You could always set up a DNS name for your domain for your testing stage, e.g. "alpha.mysite.com".
It's quite common practice to use environment variables for use cases like this. You might have an environment variable set on machines so that on prod it could be e.g. stage=prod and on your testing stage stage=test. Then in your code, you can read this environment variable and do something different depending on which stage the code is running on. For example, use the prod or development database.
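For example, a minimal sketch of that pattern; the variable name STAGE and the hostnames are illustrative, not anything your setup defines:

```python
# Sketch: pick configuration based on an environment variable.
import os

STAGE = os.environ.get("STAGE", "test")  # prod machines would set STAGE=prod

DATABASES = {
    "prod": "prod-db.internal:3306",  # placeholder hostnames
    "test": "test-db.internal:3306",
}

db_host = DATABASES[STAGE]
print(f"running in {STAGE} mode against {db_host}")
```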
It might be an idea to start using CodeDeploy for pushing your code. This way, you can have deployment hooks set up your environment on each instance: install dependencies, load the code, start the application, etc. And then, using the environment variables already on the instances being deployed to, your code will do the correct thing.
I suppose you could put the test stage on a different port on your prod machines and that way you could use the same domain, but this would be a really bad idea. I think to get a safe, fault tolerant and scalable solution, you're going to need an additional DNS name. And you most certainly shouldn't use the same ELB. If you want to test load balancing for your test application, you should use an additional ELB.
In fact some people even go the lengths of using different AWS accounts for managing test environments.
You might also be interested in CodePipeline to help you with this.
If I understand correctly, you run multiple instances behind a single ELB and want to be able to access one of the instances to test upgrades. I assume that, while performing and testing the upgrade, you don't want other users to access that instance.
I can think of a few ways to accomplish this. Here are two practical ones:
1. Remove the instance from the load balancer using the AWS console or CLI (a boto3 sketch follows this list). No requests to the ELB will go to this instance.
Access the instance you want to upgrade directly on its own address. For this, the security group on the instance must be configured to allow HTTP connections from the outside. You could allow access only from your own IP and the load balancer, for example.
2. Create another ELB for test purposes. Make sure that the instance you're upgrading only responds to the test ELB, not to the production ELB. Two ways to accomplish this: either remove it from the production ELB manually, or make the ELB health check on the instance fail (in the latter case, you would need different health checks for the test and production ELB).
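Here is a small sketch of option 1 with boto3's classic ELB client; the load balancer name and instance ID are placeholders:

```python
# Sketch: take an instance out of ELB rotation for an upgrade, then re-add it.
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Take the instance out of rotation before upgrading it.
elb.deregister_instances_from_load_balancer(
    LoadBalancerName="my-load-balancer",            # placeholder
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],  # placeholder
)

# ...perform and verify the upgrade, then put it back:
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-load-balancer",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)
```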
My advice: when costs are an issue, go for option one. When the additional costs of an extra ELB is not an issue, go for option 2, manually remove the instance from the production ELB while upgrading, and re-attach it when done and tested.
Update (I realized I didn't answer your question completely): for this to work without changing the domain in your database, you would need to point the machine you're testing from to the right host.
Two options:
1. When going for the direct HTTP connection to the instance, make sure that the instance has an external IP. Put the domain in your hosts file and point it to that IP.
2. When going for an extra test ELB, either point the domain in your hosts file to one of the ELB IPs, or run a local DNS server that has a record for the domain with a CNAME to the ELB hostname.
Although verifying the correct upgrade of a single production node is a valid use case, in this case you're probably better off creating a separate test environment on a different domain.
This way, you can test all changes/upgrades in isolation.
For best results, you would need to periodically transfer the database from production to the test environment. You could write a database script that automatically changes the domain in the database, so you can (partially or fully) automate the production-to-test database restore process.
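As a rough illustration of that script, here is a hedged sketch assuming a standard WordPress schema and the PyMySQL driver; all connection values are placeholders. Note it doesn't handle PHP-serialized values in wp_options, which tools like wp-cli's search-replace do properly:

```python
# Sketch: rewrite the site domain in a restored WordPress test database.
import pymysql  # pip install pymysql

OLD, NEW = "www.domain.com", "test.domain.com"  # placeholder domains

conn = pymysql.connect(host="test-db.internal", user="wp",
                       password="secret", database="wordpress")  # placeholders
with conn.cursor() as cur:
    # WordPress stores its canonical URLs in the siteurl and home options.
    cur.execute(
        "UPDATE wp_options SET option_value = REPLACE(option_value, %s, %s) "
        "WHERE option_name IN ('siteurl', 'home')",
        (OLD, NEW),
    )
    # Post GUIDs also embed the domain.
    cur.execute("UPDATE wp_posts SET guid = REPLACE(guid, %s, %s)", (OLD, NEW))
conn.commit()
conn.close()
```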