How to see which IP address / domain our AWS Lambda requests are being sent from? - amazon-web-services

We're using Lambda to submit API requests to various endpoints. Lately we have been getting 403 Forbidden responses from the API endpoints we're using, but only intermittently.
When it happens it seems to last for a couple of days and then stop for a while, only to start again later.
In order to troubleshoot this, the API provider(s) are asking me what IP address / domain we are sending requests from so that they can check their firewall.
I cannot find any report or anything showing me this, which seems unbelievable to me. I do see other threads about setting up a VPC with a private subnet, which would then use a static IP for all Lambda requests.
We can do that, but is there really no report or log that would show me a list of all the requests we've made and the IP/domain they came from in the current setup?
Any information on this would be greatly appreciated. Thanks!

I cannot find any report or anything showing me this, which seems unbelievable to me
Lambda exists to let you write functions without thinking about the infrastructure they're deployed on. It seems completely reasonable to me that it doesn't give you visibility into its public IP. It may not have one.
AWS has the concept of an elastic network interface (ENI). This is an entity in the AWS software-defined network that is independent of both the physical hardware running your workload and any potential public IP addresses. For example, in EC2 an ENI stays associated with an instance even while it's stopped, even though the instance may run on different physical hardware and get a different public IP when it's next started. (I've linked to the EC2 docs because that's the best description that I know of, but the same idea applies to Lambda, ECS, and anything else on the AWS network.)
If you absolutely need to know what address a particular non-VPC Lambda invocation is using, then I think your only option is to call one of the "what's my IP" APIs. However, there is no guarantee that you'll ever see the same IP address associated with one of your Lambdas in the future.
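If you go that route, here is a minimal sketch of what it could look like, assuming a Python runtime and AWS's public checkip endpoint (any "what's my IP" service would work the same way):

    # Minimal sketch: log the outbound public IP seen for this invocation.
    # The address is whatever NAT this particular invocation happened to use;
    # it is NOT guaranteed to be stable across invocations.
    import urllib.request

    def lambda_handler(event, context):
        with urllib.request.urlopen("https://checkip.amazonaws.com", timeout=5) as resp:
            outbound_ip = resp.read().decode().strip()
        print("Outbound public IP for this invocation:", outbound_ip)
        return {"ip": outbound_ip}

The printed address ends up in the function's CloudWatch log stream, so over a few days you would at least collect the set of addresses your invocations used.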
As people have noted in the comments, the best solution is to run your Lambdas in a private subnet in your VPC, with a NAT and Elastic IP to guarantee that they always appear to be using the same public IP.
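For reference, a rough boto3 sketch of that setup: allocate an Elastic IP, put a NAT gateway in a public subnet, route the private subnet's internet traffic through it, and attach the function to the private subnet. All the IDs and the function name below are placeholders for your own resources:

    import boto3

    ec2 = boto3.client("ec2")
    lam = boto3.client("lambda")

    # Allocate an Elastic IP and put a NAT gateway in an existing public subnet.
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0public0000000000",      # hypothetical public subnet
        AllocationId=eip["AllocationId"],
    )["NatGateway"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGatewayId"]])

    # Route the private subnet's internet-bound traffic through the NAT gateway.
    ec2.create_route(
        RouteTableId="rtb-0private000000000",     # hypothetical private route table
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat["NatGatewayId"],
    )

    # Attach the function to the private subnet; its requests now egress via the EIP.
    lam.update_function_configuration(
        FunctionName="my-api-client",             # hypothetical function name
        VpcConfig={
            "SubnetIds": ["subnet-0private000000000"],
            "SecurityGroupIds": ["sg-0000000000000000"],
        },
    )

The Elastic IP attached to the NAT gateway is then the single address you can hand to the API provider for their firewall.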

Related

Reaching GCP Cloud Run instance through VPC with "only internal range" egress

The current setup is as follows:
I have a Cloud Run service which acts as the "back-end": it needs to reach external services but should be reachable ONLY by a second Cloud Run instance, which acts as the "front-end", needs to reach auth0 and the "back-end", and must be reachable by any client with a browser.
I recognize that the setup is not optimal, but I've inherited it as-is and we cannot migrate to another solution (maybe k8s). I'm trying to make this work with the least amount of impact on the infrastructure and, ideally, without having to touch the services themselves.
What I've tried is to restrict the ingress of the back-end service to INTERNAL and place two serverless VPC connectors (one per service), so that the front-end service would be able to reach the back-end but no one else could.
But I've encountered a huge issue: if I send all of the front-end's egress through the VPC it works, but then the front-end cannot reach auth0 and therefore users cannot authenticate. If I set the egress to "mixed" (only internal IP ranges go through the VPC), the Cloud Run URL (*.run.app) is resolved not through the VPC and therefore it returns a big bad 403.
What I tried so far:
Placing a load balancer in front of the back-end service. But the serverless NEG only supports the global HTTP(S) load balancer, and I'd need an internal one if I wanted an internal IP to resolve against
Trying to see if the VPC connector itself MAYBE provided an internal (static) IP, but it doesn't seem so
Someone in another question suggested a "MIG as a proxy" but I haven't managed to figure that out (Can I run Cloud Run applications on a private IP (inside dedicated VPC network)?)
Fooled around with the Gateway API, but it seems that I'd have to provide an OpenAPI specification for the back-end, and I'm still under the delusion that this might be resolved with a cheaper (in terms of effort) approach.
So, I get that the Cloud Run instance cannot possibly have an internal IP by itself, but is there any kind of GCP product that can act as a proxy? Can someone elaborate on the "MIG as a proxy" approach (Managed Instance Group? Of what, though?), which might be the solution I'm looking for? (Sadly, I do not have the reputation needed to comment on that question or I would have).
Any kind of pointer is, as always, deeply appreciated.
You are designing this wrong. Use Cloud Run's identity-based access control instead of trying to route traffic. Google IAP (Identity-Aware Proxy) will block all traffic that is not authorized.
Authenticating service-to-service
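To make that concrete, here is a hedged sketch of what the service-to-service call can look like from the front-end, assuming Python and a placeholder back-end URL. The front-end asks the metadata server for an identity token with the back-end's URL as the audience and passes it as a Bearer token; the back-end is left set to "require authentication" so unauthenticated callers get rejected:

    import urllib.request

    BACKEND_URL = "https://backend-xyz-uc.a.run.app"   # hypothetical back-end URL

    def call_backend(path="/"):
        # Fetch an identity token for the back-end from the metadata server.
        token_req = urllib.request.Request(
            "http://metadata.google.internal/computeMetadata/v1/instance/"
            "service-accounts/default/identity?audience=" + BACKEND_URL,
            headers={"Metadata-Flavor": "Google"},
        )
        with urllib.request.urlopen(token_req) as resp:
            id_token = resp.read().decode()

        # Call the back-end with the token; Cloud Run verifies it before your code runs.
        req = urllib.request.Request(
            BACKEND_URL + path,
            headers={"Authorization": "Bearer " + id_token},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode()

The front-end's service account also needs the roles/run.invoker role on the back-end service for the token to be accepted.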

Fixed IP address for service behind aws application load balancer

Our company just moved to a new office and therefore also got new network equipment. As it turns out, our new firewall does not allow pushing routes over VPN for which it first has to look up IP addresses.
As we all know, AWS does not allow static IP addresses for its Application Load Balancer.
So our idea was to simply put a Network Load Balancer in front of the Application Load Balancer. There is a pretty hacky way described by AWS itself (https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/) that seemed to work fine, even if I don't really like the approach with the Lambda script registering and deregistering targets.
So here is our problem: as it turns out, the Application Load Balancer only gets to see the Network Load Balancer's IP address. This prevents us from using security groups for IP whitelisting, which we rely on quite heavily. On top of that, some of our applications (Nginx/PHP based) also do IP address verification, and the ALB used to pass the client's IP address in an X-Forwarded-For header. Now our applications only see the one from the NLB.
We know of the possibility to use Global Accelerator, but that is a heavy investment as we don't really need what GA is trying to solve.
So how did you guys solve this problem?
Thankful for any help :)
Greetings
You could get the list of AWS IP addresses for the region your ALB is located in and allow them in your firewall. AWS does publish the list and you can filter through it: https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
I haven't done this myself and I'm unsure whether the addresses for ALB are included under the EC2 category or whether you would have to take the whole of the AMAZON service "to be safe".
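If you want to try it, a rough sketch of the filtering, assuming Python, the published ip-ranges.json feed, and example region/service values you would adjust:

    import json
    import urllib.request

    RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

    def aws_prefixes(region="eu-west-1", service="EC2"):
        # Download the published feed and keep only the matching CIDR blocks.
        with urllib.request.urlopen(RANGES_URL) as resp:
            data = json.load(resp)
        return sorted(
            p["ip_prefix"]
            for p in data["prefixes"]
            if p["region"] == region and p["service"] == service
        )

    if __name__ == "__main__":
        for cidr in aws_prefixes():
            print(cidr)

Whether ALB addresses show up under the EC2 service or only under the broad AMAZON entry is exactly the caveat above, so check both before feeding the list into the firewall.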
Can you expand on this? "We know of the possibility to use the global accelerator but that is a heavy investment as we don't really need what the GA is trying to solve."
GA should give you better, more consistent performance, especially if your office is far away from the AWS Region where the ALB is running

Does my Cloud Function "own" its IP address while it is running?

Can I assume that while my cloud function is running, no other cloud function (that is also currently running) also has the same IP address? In other words, do I "own" the IP address of the cloud function during the time in which it is running?
My guess is no, since it would just cost Google more money to do that without much benefit for 95% of users, but I couldn't find any info on this anywhere, hence this question.
If my intuition is correct, then perhaps the only way to be sure that my function has a unique IP is to assign it a static IP? As of writing, static IPs for Cloud Functions are apparently in beta.
Currently, as the product stands, you cannot assume that an outgoing request from Cloud Functions will appear to come from an IP address that no other function's outgoing traffic also uses. As you've seen in the other question, there are blocks of addresses that Google owns, and the traffic could appear to come from anywhere within those blocks, depending on the region of deployment and other factors. You can expect that there are far more Cloud Functions deployed across all projects for all customers running concurrently than there are specific IPs within those blocks. So you should not make any assumptions about the IP of origin. It could change at any time, and any function's or project's traffic may appear to come from it.
If this situation changes due to additional features offered by Cloud Functions, you might get a different set of guarantees, but it's not clear what those are without being in this beta program.
Doug is right. There is no guarantee of the IP address. And I haven't heard about any alpha/beta program with a static public IP.
However, there is a beta program called the VPC connector, in the networking section of the console, which allows you to define a small range of IPs (a /28 CIDR) to be used by the function to enter the VPC of your project. You can then set up all the routes and firewall rules that you want for this range in your VPC.
Finally, about the early access mentioned in the link (which shouldn't be public), it's not exactly that. Stay tuned.

AWS Best practice - When external ip address on stop/start

Here's what's bothering me. Is there a better way than sending emails to devs that the IP address for their dev server has changed after the instance is stopped and started?
I was thinking of a single small instance with an Elastic IP, which the devs can log in to from a terminal and then SSH from to the internal IP address of the dev server. Is that effective?
Does it mean that the devs need to be informed of the change every time?
It's unclear exactly what you are saying: "there's a new public DNS for the server"? Thanks for the comment, that makes it clearer: it's the AWS domain name in the format "ec2-54-222-213-143.eu-west-1.compute.amazonaws.com" you are referring to.
You are asking how these name/address changes can be managed?
Generally speaking, for fixing these kinds of problems there are a couple of things to be aware of.
Firstly, if it is the public IP address that is changing, use an Elastic IP instead of an ephemeral public IP address. This will stay the same and can be transferred from an old instance to a new instance. Please read http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html about the differences between "Elastic IP" and normal public IP addresses on AWS.
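As a minimal boto3 sketch of that, with a placeholder instance ID: allocate an Elastic IP once and associate it with whichever instance is current; the address survives stop/start and can be moved to a replacement instance later.

    import boto3

    ec2 = boto3.client("ec2")

    # One-time: allocate an Elastic IP in the VPC scope.
    alloc = ec2.allocate_address(Domain="vpc")

    # Attach it to the dev server; rerun with a different InstanceId to move it.
    ec2.associate_address(
        InstanceId="i-0123456789abcdef0",       # hypothetical instance ID
        AllocationId=alloc["AllocationId"],
        AllowReassociation=True,
    )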
Secondly, if you are concerned about maintaining the DNS records that map the IP addresses to the domain names, then it is possible to automate the updates to AWS Route 53. I have used the AWS CLI command "route53 change-resource-record-sets" for this, and also CloudFormation.
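The same call is available from boto3 if you'd rather script it; here is a sketch with a placeholder hosted zone ID and record name that upserts an A record pointing at the instance's current public IP:

    import boto3

    route53 = boto3.client("route53")

    def upsert_record(public_ip, zone_id="Z0HYPOTHETICAL", name="dev.example.com."):
        # UPSERT creates the record if missing, or updates it if it already exists.
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={
                "Comment": "Point the dev hostname at the instance's current public IP",
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": name,
                        "Type": "A",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": public_ip}],
                    },
                }],
            },
        )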
Automating events to occur on instance start-up does take a little research of the available APIs and hooks; for example, see this answer with a simple use of cloud-init: Using cloud-init user data

Is there any way to turn a non-elastic IP into an elastic IP on aws?

I have done some research and don't think it is possible but figured I would ask on here just to be sure.
My predecessor decided to use the public and private IP of one of our database servers in an extremely large number of places. Now that we are going to be resizing this DB server, going through and changing all of those IPs would take a large amount of time, and the possibility of missing one is pretty high.
I am wondering if it is at all possible to take the current IP on the server (which is not elastic) and somehow convert it to an Elastic IP. To clarify, I am not looking to add a new Elastic IP to the server but rather take the IP that is currently assigned to it and make that elastic. If this is not something that I can do using the SDK / Console, is it something that Amazon could do behind the scenes if we were to get support?
Thanks !
No, it is not possible.
The Elastic IP addresses are a separate pool from the Public IP addresses. There is no public means to convert a public (or private) IP address to an Elastic IP.
Standard Amazon support is unlikely to be able to make such a switch for you. While technically an Amazon network engineer can probably make such a switch, it is very unlikely that support could make that happen.
If this is not something that I can do using the SDK / Console is it something that Amazon could do behind the scenes if we were to get support?
Amazon can create a reverse DNS record for a mail server manually and is known to implement features that users request, so I guess it might be worth asking. I would give it a try.
So long as you do not terminate the instance, its static IP should remain assigned to it per Amazon documentation (https://aws.amazon.com/articles/1346).
now that we are going to be resizing this DB server
You can resize the instance without terminating it, and it will keep its static IP. The moment you terminate the instance you lose the static IP, so resize it in place rather than replacing it.
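For what it's worth, a hedged boto3 sketch of resizing in place, with a placeholder instance ID and target type. The instance is stopped, its type is changed, and it is started again; it is never terminated, so its private IP in the VPC (and any Elastic IP) stays put:

    import boto3

    ec2 = boto3.client("ec2")
    INSTANCE_ID = "i-0123456789abcdef0"          # hypothetical instance ID

    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

    # Change the instance type in place; the instance is not terminated.
    ec2.modify_instance_attribute(
        InstanceId=INSTANCE_ID,
        InstanceType={"Value": "m5.2xlarge"},    # hypothetical target size
    )

    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])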