My company has an Azure function app that accesses a service on an AWS EC2 instance.
We use an AWS security group to allow access to the service's port only from the 7 possible outbound IP addresses used by the Azure function app, which are listed in the Azure portal.
However, when the function app scales out, it can apparently run from any of ~600 CIDR ranges of possible Azure IP addresses (AzureCloud.eastus2 in my case).
In those cases the function app fails to reach the web service.
AWS security groups allow only 60 inbound rules by default, so I couldn't add 600 even if I wanted to.
Is there a better approach to opening an AWS instance's port to an Azure function app?
Just as a NAT gateway gives AWS Lambda functions a static outbound IP, you can do the same for Azure Functions with a virtual network NAT gateway. That appears to be your best option.
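If that works out, the AWS side collapses to a single security group rule for the NAT gateway's static IP. A minimal boto3 sketch, assuming a hypothetical security group ID, service port, and NAT public address:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# Allow the service port only from the Azure NAT gateway's static public IP.
# The group ID, port, and address below are hypothetical placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8443,
        "ToPort": 8443,
        "IpRanges": [{
            "CidrIp": "203.0.113.25/32",
            "Description": "Azure Functions egress via VNet NAT gateway",
        }],
    }],
)
```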
What is the difference between Private Link and VPC endpoint? As per the documentation, a VPC endpoint seems to be a gateway for accessing AWS services without exposing the data to the internet. But the definition of AWS PrivateLink looks similar.
Reference Link:
https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-services-overview.html
Is Private Link a superset of VPC endpoint?
It would be really helpful if anyone could explain the difference between these two with examples!
Thanks in Advance!
AWS defines them as:
VPC endpoint — The entry point in your VPC that enables you to connect privately to a service.
AWS PrivateLink — A technology that provides private connectivity between VPCs and services.
So PrivateLink is a technology that lets you privately (without the internet) access services in VPCs. These services can be your own, or provided by AWS.
Let's say that you've developed some application and you are hosting it in your VPC. You would like to give services in other VPCs, and other AWS users/accounts, access to this application, but you don't want to set up any VPC peering or use the internet for that. This is where PrivateLink comes in: using PrivateLink you can create your own VPC endpoint service, which enables other services to use your application.
In the above scenario, a VPC interface endpoint is the resource that users of your application would have to create in their VPCs to connect to your application. This is the same as when you create a VPC interface endpoint to access AWS-provided services privately (no internet), such as Lambda, KMS, or SMS.
There are also gateway VPC endpoints, an older mechanism that predates PrivateLink. Gateway endpoints can only be used to access S3 and DynamoDB, nothing else.
To sum up, PrivateLink is a general technology that either you or AWS can use to allow private access to internal services. A VPC interface endpoint is the resource that the users of such services create in their own VPCs to interact with them.
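As a concrete illustration of the consumer side, here is a minimal boto3 sketch of creating an interface endpoint to an AWS-provided service (KMS in this example); all resource IDs are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An ENI with a private IP is injected into the given subnet, and private
# DNS resolves the KMS hostname to it, so traffic never crosses the internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.kms",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```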
Suppose there is a website xyz.com that I am hosting on a bunch of EC2 instances, exposed to the outside world through a Network Load Balancer.
Now, a client who has his/her own AWS account wants to access xyz.com from an EC2 instance running in their AWS account.
One approach is to go through the internet.
However, the client wants to avoid the internet route.
He/she wants to use the AWS backbone to reach xyz.com.
The technology that enables that is AWS PrivateLink.
(Note that if you search for "Private Link" in the AWS services, there will be none.
You will get "Endpoint services" as the closest hit.)
So, this is how to route traffic through the AWS backbone (a code sketch of both sides follows below):
1. I, the owner of xyz.com, create a VPC Endpoint Service (NOTE the keyword Service here).
2. The VPC endpoint service points to my Network Load Balancer.
3. I then give my VPC endpoint service name to the client.
4. The client creates a VPC Endpoint (NOTE: this is different from #1).
5. While creating it, the client specifies the VPC endpoint service name (from #1) that he/she got from me.
6. I can choose to be prompted to accept the connection from the client to my VPC endpoint service.
7. As soon as I accept it, the client can reach xyz.com from his/her EC2 instance.
There is no internet, no Direct Connect or VPN... this simply works, and it's secure.
And the technology that enables it: AWS PrivateLink!
Note also that PrivateLink works even when the two VPCs have overlapping CIDR ranges, something VPC peering cannot do.
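To make the two sides concrete, here is a hedged boto3 sketch of the flow in the list above; the NLB ARN and all resource IDs are hypothetical placeholders, and in practice the two halves run in different AWS accounts, with the service name passed between them out-of-band.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# --- Provider side (owner of xyz.com) ---
# Create the endpoint service in front of the Network Load Balancer (#1, #2).
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/xyz/0123456789abcdef"
    ],
    AcceptanceRequired=True,  # the provider must accept each connection (#6)
)
service_name = svc["ServiceConfiguration"]["ServiceName"]  # shared with the client (#3)

# --- Consumer side (client's account/VPC) ---
# Create an interface endpoint that targets the provider's service name (#4, #5).
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0consumer0000000",
    ServiceName=service_name,
    SubnetIds=["subnet-0consumer000000"],
    SecurityGroupIds=["sg-0consumer0000000"],
)
```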
A useful way to understand the differences is to look at how each technically connects private resources to public services.
Gateway endpoints route traffic by adding a route for the service's prefix list to the VPC route table, targeting the gateway endpoint. It is a logical gateway object, similar to an Internet Gateway.
In contrast, an interface endpoint uses PrivateLink to inject itself into the VPC at the subnet level via an Elastic Network Interface (ENI). That gives it network-interface functionality, and therefore DNS and private IP addressing, as the means of connecting to AWS public services, rather than simply being routed to them.
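To see the mechanical difference, compare the interface endpoint sketch shown earlier with a gateway endpoint: the same boto3 create_vpc_endpoint call, but taking route table IDs instead of subnets and security groups, matching the routing-based behaviour described above (all IDs hypothetical).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: no ENI is created; instead a route for the S3
# prefix list is added to the listed route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```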
The two connection styles offer differing advantages and disadvantages (availability, resiliency, access, scalability, etc.), which dictate how best to connect private resources to public services.
PrivateLink is simply a heavily abstracted technology that simplifies the connection by using DNS. The following AWS re:Invent talk offers a great overview of PrivateLink: https://www.youtube.com/watch?v=abOFqytVqBU
As you correctly mentioned in the question, neither a VPC endpoint nor AWS PrivateLink exposes traffic to the internet. In the AWS console, under VPC, there is a clear option to create an endpoint, but there is no option/label to create an AWS PrivateLink. There is, however, one more option/label called "endpoint service". Creating an endpoint service is one way to establish AWS PrivateLink: on one side of the PrivateLink is your endpoint service, and on the other side is the endpoint itself. Interestingly, we create these two sides in two different VPCs. In other words, you are connecting two VPCs with this private link (instead of using the internet or VPC peering).
Think of it like this:
VPC1 (endpoint service) ----> PrivateLink ----> VPC2 (endpoint)
Here the endpoint service side is the service provider, while the endpoint side is the service consumer. So when you have some service (maybe an application or software) that you think endpoints in other VPCs could consume, you create an endpoint service at your end, and consumers create endpoints at their end. When consumers create their endpoints, they have to give/select your service name, and thus the private link to your service is established.
Ultimately you can have multiple consumers of your service, a one-to-many relationship.
Is it possible to access GCP PaaS (App Engine, Cloud Functions, Cloud Run) internally (through a VPC)?
I see in this doc : https://cloud.google.com/vpc/docs/configure-serverless-vpc-access
"Serverless VPC Access only allows requests to be initiated by the serverless environment. Requests initiated by a VM must use the external address of your serverless service—see Private Google Access for more information."
But searching for something like "Serverless VPC Access allows in/out requests" didn't give me a clear answer.
There are two directions to consider: in and out.
Request TO serverless APP
You can use ingress control with Cloud Functions and Cloud Run services. You can say: I want only connections coming from my VPC (or my VPC Service Controls perimeter) to reach my serverless app. With App Engine you have firewall rules, but they don't work with private IPs.
Request FROM serverless APP
Here you want to reach a private resource exposed only inside your VPC with a private IP. With Cloud Run, Cloud Functions, and App Engine, you can plug in a Serverless VPC connector to achieve this.
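As a rough sketch of how both settings can be applied programmatically, here is an example using the Cloud Functions v1 API via the Google API client library (pip install google-api-python-client); the project, region, function, and connector names are hypothetical placeholders, and both settings can equally be set at deploy time.

```python
from googleapiclient import discovery

service = discovery.build("cloudfunctions", "v1")
name = "projects/my-project/locations/us-central1/functions/my-func"

service.projects().locations().functions().patch(
    name=name,
    updateMask="ingressSettings,vpcConnector",
    body={
        # Requests TO the app: only allow traffic originating in the VPC.
        "ingressSettings": "ALLOW_INTERNAL_ONLY",
        # Requests FROM the app: send egress through a Serverless VPC connector.
        "vpcConnector": "projects/my-project/locations/us-central1/connectors/my-connector",
    },
).execute()
```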
EDIT 1
With your appliance firewall deployed on Google Cloud, App Engine isn't the perfect product for this. Indeed, with App Engine you can't control the ingress traffic: you always accept traffic from the internet, even if you already have something (here, your appliance) on the Google Cloud network with a private IP.
The solution here (to be tested; it depends on the appliance's capacity) is to use Cloud NAT with a reserved static IP, routing all traffic from the subnet where the appliance is deployed through it.
Then, on App Engine, you can set a firewall rule that accepts traffic only from this reserved static IP.
The latency will increase with all these layers...
I have an instance and an S3 bucket in AWS (access to which I'm limiting to a range of IPs). I want to create a VPN and authenticate myself when logging into that VPN to get to the instance.
To simplify: I'm trying to set up a dev environment for my site. I want to limit access to that instance and use a service to authenticate anybody trying to reach it. Is there a way to do all of this in AWS?
Have you looked at AWS Client VPN:
https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html
This allows you to create a managed VPN endpoint in your VPC, which you can connect to using any OpenVPN client. You can then allow traffic from this VPN to your instance using security group rules.
Alternatively you can achieve the same effect using OpenVPN on an EC2 server, available from the marketplace:
https://aws.amazon.com/marketplace/pp/B00MI40CAE/ref=mkt_wir_openvpn_byol
It requires a bit more setup but works just fine; it's perfect if AWS Client VPN isn't available in your region yet.
Both these approaches ensure that your EC2 instance remains in a private subnet and is not accessible directly from the internet. Also, the OpenVPN client for mac works just fine.
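As a small illustration of the security group side, here is a hedged boto3 sketch that allows SSH to the instance only from the security group associated with the VPN (both group IDs are hypothetical placeholders).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Reference the VPN's security group as the traffic source instead of
# maintaining lists of client IP addresses.
ec2.authorize_security_group_ingress(
    GroupId="sg-0instance11111111",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "UserIdGroupPairs": [{
            "GroupId": "sg-0clientvpn2222222",
            "Description": "Traffic arriving via the VPN",
        }],
    }],
)
```

Referencing the VPN's security group rather than IP ranges means the rule keeps working even if the VPN's addressing changes.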
I have provisioned an AWS API Gateway and created a Lambda function to connect to an external REST API. The API Gateway and Lambda are not in a VPC, so the egress IP address is random. The challenge is that the external REST API is behind a firewall, which requires the IP address or subnet of the Lambda to be whitelisted.
I have looked at the AWS IP address ranges page (below); however, there is no explicit mention of either API Gateway or Lambda.
https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html#filter-json-file
Has anyone come across this before and found a resolution? For the purposes of this solution I cannot put the API Gateway & Lambdas in a VPC.
Any help would be greatly appreciated!
API Gateway seems to be irrelevant to this discussion. If I understand your question, you're trying to make API requests from a Lambda function to a remote API server and you want those requests to originate from a known IP address so that you can whitelist that IP at the remote server.
First thing I would say is don't use IP whitelisting; use authenticated API requests instead.
If that's not possible, then use a VPC with an Internet Gateway (IGW): create a NAT gateway with an Elastic IP, launch the Lambda into a private subnet of that VPC, and route the subnet's non-local traffic to the NAT. Then whitelist the NAT's Elastic IP on the remote API server.
I know that you said you "cannot put [...] Lambdas in a VPC", but if you don't then you have no control over the originating IP address.
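For reference, a minimal boto3 sketch of the NAT setup described above, with hypothetical subnet and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate an Elastic IP; this becomes the static egress address that you
# whitelist at the remote API server.
eip = ec2.allocate_address(Domain="vpc")

# Create the NAT gateway in a *public* subnet of the VPC.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa111public000",  # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send the private subnet's non-local traffic to the NAT. The Lambda
# function is then launched into that private subnet.
ec2.create_route(
    RouteTableId="rtb-0bbb222private00",  # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```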
It is frustrating that the only way to ensure a Lambda function uses a static IP, without a hack, is to put the Lambda inside a VPC and create a NAT with an Elastic IP, as other answers nicely explain.
However, NAT gateways cost around $40 per month in the regions I am familiar with, even with minimal traffic. This may be cost-prohibitive for certain use cases, such as when you need multiple dev/test/staging environments.
One possible workaround (use with caution) is to create a micro EC2 instance with an Elastic IP (which provides the static address), plus your choice of proxying/routing, so you can make HTTP calls by tunnelling from the Lambda function through the EC2 instance (e.g., SSH from the Lambda function into the EC2 instance, then curl from EC2 to the endpoint protected by the allowlist).
This adds a few extra hoops and could introduce security vulnerabilities that should be mitigated (e.g., beware of storing SSH keys or passwords inside a Lambda function; keep security groups tight), but I wanted to post it as a possible cost-effective workaround for any devs who need to connect to an endpoint that enforces allowlist rules.
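Here is a rough sketch of that tunnelling approach, assuming paramiko is bundled with the Lambda deployment package and the SSH key lives in Secrets Manager; the secret name, proxy IP, and endpoint URL are hypothetical placeholders.

```python
import io
import boto3
import paramiko

def handler(event, context):
    # Fetch the proxy instance's SSH private key from Secrets Manager
    # ("proxy-ssh-key" is a hypothetical secret name) rather than bundling it.
    secret = boto3.client("secretsmanager").get_secret_value(SecretId="proxy-ssh-key")
    key = paramiko.RSAKey.from_private_key(io.StringIO(secret["SecretString"]))

    # SSH into the micro EC2 proxy via its Elastic IP (placeholder address).
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("203.0.113.10", username="ec2-user", pkey=key)

    # Run the HTTP call from the EC2 instance, whose Elastic IP is allowlisted.
    _, stdout, _ = client.exec_command("curl -s https://api.example.com/data")
    body = stdout.read().decode()
    client.close()
    return {"statusCode": 200, "body": body}
```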
I want to understand why there is a need to configure a VPC to access Redis, but not other services (like DynamoDB), from a Lambda function.
Redis (either on an EC2 instance or via ElastiCache) runs inside your VPC. So do EC2, RDS, Redshift, and several other AWS services. If you want to access one of the services that runs inside your VPC, you have to configure your Lambda function to have VPC access.
Other services, like DynamoDB, that don't exist inside the VPC obviously don't require VPC access.
As for the reason why some services are inside the VPC and not others, the services that exist in the VPC tend to be things where you specify a specific server configuration (server size, number of servers, etc) and you run those servers inside your virtual network (VPC). On the other hand there are some services like DynamoDB where the actual back-end server size and quantity are not exposed to you. Amazon completely manages these servers and you don't even get to know how many of them there are. These servers do not exist inside your virtual network, and you only access these services via the AWS API.
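As a small illustration of that configuration step, here is a hedged boto3 sketch of giving an existing Lambda function VPC access so it can reach ElastiCache/Redis; the function name, subnets, and security group are hypothetical placeholders.

```python
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

# Attach the function to private subnets so it can reach resources
# (like a Redis cluster) that only exist inside the VPC.
lam.update_function_configuration(
    FunctionName="my-func",
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1110000000000", "subnet-0bbb2220000000000"],
        "SecurityGroupIds": ["sg-0ccc3330000000000"],
    },
)
```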