Calling a Lambda function from an EC2 without internet access - amazon-web-services

We have an EC2 instance which, for security reasons, has no Internet access. At the same time, the code running on that server needs to call some Lambda functions. It seems to me these two requirements are contradictory, since without Internet access the code cannot call Lambda functions.
Does anyone have any suggestions on what my options are without sacrificing the security aspect of the project?

You won't be able to reach the AWS APIs in general without internet access. Two exceptions are S3 and DynamoDB, where you can create VPC endpoints and keep the traffic completely on a private network. Some services can also be exposed through PrivateLink, but Lambda is not yet one of them.
You can learn more about those here: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
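For illustration, here is a minimal sketch of creating a gateway endpoint for S3 with boto3; the VPC ID and route table ID are placeholders for your own resources:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway endpoint so instances in the VPC can reach S3
# without traversing the internet. The VPC and route table IDs are
# placeholders.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```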
Depending on your security requirements, you might be able to use a NAT Gateway (https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html) or an Egress-Only Internet Gateway (https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/egress-only-internet-gateway.html)
Those would provide access from the instance to the internet, but without the reverse being true. In many cases, this will provide enough security.
Otherwise, you will have to wait for PrivateLink to support Lambda. You can see more on how to work with PrivateLink here: https://aws.amazon.com/blogs/aws/new-aws-privatelink-endpoints-kinesis-ec2-systems-manager-and-elb-apis-in-your-vpc/
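Once the instance does have a route to the Lambda API (for example through a NAT gateway), invoking a function is a standard SDK call. A minimal sketch with boto3, where the function name is a placeholder:

```python
import json
import boto3

# This call still goes to the regional Lambda endpoint, so the instance
# needs a network path to it (NAT gateway today, or a PrivateLink
# endpoint once Lambda supports it). The function name is a placeholder.
lambda_client = boto3.client("lambda", region_name="us-east-1")

response = lambda_client.invoke(
    FunctionName="my-function",
    InvocationType="RequestResponse",
    Payload=json.dumps({"key": "value"}),
)
print(json.loads(response["Payload"].read()))
```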

Related

Does AWS Neptune need internet access?

I was starting a neptune database from this base stack
https://s3.amazonaws.com/aws-neptune-customer-samples/v2/cloudformation-templates/neptune-base-stack.json
However, now I am wondering why a NAT Gateway and an Internet Gateway are created in this stack. Are they required for updates within Neptune? This seems like a huge security risk.
On top of that these gateways are not cheap.
I would be happy for an explanation on this
The answer is no, it's not required; AWS just sneaked some unnecessary, costly resources into the template.
Anyway, if you want to use the template without the NAT and Internet Gateways, use this one that I created: https://neptune-stack-custom.s3.eu-central-1.amazonaws.com/base.json
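If you prefer the SDK over the console, a rough sketch of launching a stack from such a template with boto3 follows; the stack name is arbitrary, CAPABILITY_IAM is an assumption (the base template creates IAM roles), and you should check the template for its actual parameters:

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="eu-central-1")

# Launch the modified Neptune base stack. Stack name and capabilities
# are assumptions; consult the template for required parameters.
cloudformation.create_stack(
    StackName="neptune-no-nat",
    TemplateURL="https://neptune-stack-custom.s3.eu-central-1.amazonaws.com/base.json",
    Capabilities=["CAPABILITY_IAM"],
)
```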

What's the correct way to execute multiple Google Cloud Functions with different outgoing IPs?

I intend to use Google Cloud Functions to access an API. My goal is to use several functions, each with a different IP address. I would be distributing processing across several functions simultaneously, each of which would then interact with the target API. As I understand it, there's the possibility that the execution of two separate functions could take place on the same machine, meaning requests would originate from the same IP. In order to respect the rate limits, I need to know how many requests will be going through each IP address and therefore need to ensure that each function executes with a separate IP.
I'm new to Google Cloud Functions, but I've made some progress. Currently, I have a function function-1. This function is using connector-1 and passing all egress traffic through my default VPC network. I followed the guide provided by Google Cloud for associating a static IP with my function. As a result, I now have router-1 which is connected with my NAT gateway nat-1. Finally, nat-1 has a static IP associated with it.
At this point, any execution of function-1 is using the static IP as expected. However, I'm still trying to understand the proper way of structuring this. I have a few questions on the matter:
Do I have to duplicate every link in the chain for each function that requires its own IP address?
Am I able to re-use some of these items? For example, perhaps all functions can use the same VPC network?
Is there a better way to structure things to meet my requirements assuming I needed 10 or 20 functions using different IPs?
The answers:
I'm not sure what you mean by "duplicate every link in the chain", but if you want each Cloud Function to have its own static IP address, you will have to follow the steps you shared.
Yes, you can re-use the VPC network and attach a new serverless VPC connector, even in the same region.
If you want to force a different static IP for each Cloud Function, then no, you need to follow these same steps for each one.
As a tip, you can use gcloud compute networks vpc-access connectors create to automate creating the connectors (see the sketch after this list). It may be useful if you have to create many, because it's faster than using the Console.
If this limitation does not suit your scenario, you should consider whether this is the appropriate product for you.
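As an example, here is a rough sketch that shells out to gcloud to create several connectors in a loop. The region, network name, and IP ranges are placeholders, and each connector needs its own unused /28 range:

```python
import subprocess

# Create one serverless VPC connector per function that needs its own
# egress path. Region, network, and /28 ranges are placeholders.
REGION = "us-central1"
NETWORK = "default"
RANGES = ["10.8.0.0/28", "10.8.1.0/28", "10.8.2.0/28"]

for i, ip_range in enumerate(RANGES, start=1):
    subprocess.run(
        [
            "gcloud", "compute", "networks", "vpc-access", "connectors",
            "create", f"connector-{i}",
            f"--region={REGION}",
            f"--network={NETWORK}",
            f"--range={ip_range}",
        ],
        check=True,
    )
```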

AWS Lambda and ECS Tasks - what is the best way to orchestrate?

I'm building my application with an "as much serverless as possible" premise.
Long story short, two services cannot be implemented as Lambda functions, so I opted for ECS tasks on an EC2 Auto Scaling group, due to GPU requirements, etc.
After doing my homework on Lambda + VPC, I was shocked that there's no easy and pleasant way to expose VPC services to Lambda. The official approach is to put the Lambda function into the VPC and establish a NAT gateway/instance or VPC endpoints so it can reach the internet and other AWS services. Moreover, I read that this is not recommended and should be treated as a last resort, since it slows Lambda down and increases cold starts.
Generally, I need internet access and the ability to reach other AWS services from the Lambda functions, which must also make requests to ECS tasks. Those tasks are crucial contributors to my flow, and I'd like them to be easily callable from Lambda functions. I'm not sure VPC Lambdas make sense if I need to pay for NAT, which is comparatively expensive. Maybe I missed something.
Is it possible to avoid putting the Lambdas into the VPC and still be able to call ECS services? If not, what is the best way to cut the costs related to NAT?
I'd appreciate any form of help.
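For concreteness, a sketch of what the handler described in the question would do once networking is sorted out: call an external API and then a service running as an ECS task. The external URL and the internal load balancer DNS name in front of the ECS service are hypothetical placeholders.

```python
import urllib.request

# Illustrative only: both URLs are placeholders. Reaching the external
# API from a VPC-attached Lambda requires a NAT gateway; reaching the
# ECS service only requires network access to its VPC.
EXTERNAL_API = "https://api.example.com/data"
ECS_SERVICE = "http://internal-my-service-alb.eu-west-1.elb.amazonaws.com/process"


def handler(event, context):
    # Fetch data from the outside world, then hand it to the ECS task.
    external = urllib.request.urlopen(EXTERNAL_API, timeout=10).read()
    req = urllib.request.Request(
        ECS_SERVICE,
        data=external,
        headers={"Content-Type": "application/json"},
    )
    result = urllib.request.urlopen(req, timeout=30).read()
    return {"statusCode": 200, "body": result.decode()}
```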

AWS API Gateway - access internally

Although this question may seem like something you've seen in the past, please read it before assuming, as this is about a different type of internal access.
We currently have a few API Gateways serving different needs. These Gateways are public (regional) and accessed by public consumers.
On an ad-hoc basis, we do back-end releases, which entail removing the Gateway from external (public) access. The process is then to make all the deployments needed and test the Gateway once it is public again.
We go "internal" by adding the current load balancer(s) into a group that's only accessible via an internal IP range.
I'd like to know whether there is a way we could access the same Gateway internally while we are offline, to help speed up testing once we go back to external access.
One of the ways is to use a WAF. You can automate the process of changing the rule so the API is open only to you or to the whole world. An IP match condition rule can be useful for whitelisting.
https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-api-gateway-adds-support-for-aws-waf/
You can have IP-based access for your API Gateway.
There's a blog I found that could be useful to you.
https://lobster1234.github.io/2018/04/14/amazon-api-gateway-ip-whitelisting/
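A sketch of what automating that rule change might look like with the classic regional WAF API via boto3; the IP set ID and CIDR are placeholders:

```python
import boto3

waf = boto3.client("waf-regional", region_name="eu-west-1")

# Temporarily restrict the API to an internal CIDR by inserting it into
# the IP set referenced by the WAF rule attached to the API Gateway
# stage. The IP set ID and CIDR are placeholders.
token = waf.get_change_token()["ChangeToken"]
waf.update_ip_set(
    IPSetId="example-ip-set-id",
    ChangeToken=token,
    Updates=[
        {
            "Action": "INSERT",
            "IPSetDescriptor": {"Type": "IPV4", "Value": "10.0.0.0/16"},
        }
    ],
)
```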

amazon load balancer for lambda

I am new to AWS.
Normally, I use a load balancer like the one below with 2 servers.
For L4 load balancing, there are more than 2 servers behind it.
But ALB with Lambda doesn't seem to work that way, I think.
I am curious about the ALB - Lambda relationship.
Is it 1:1, unlike an L4 switch? Or do VPCs stand in for the servers?
And I want to know the benefit of using an ALB for Lambda.
Lambda is a short-running piece of code - FaaS (Function as a Service). The function executes quickly, in milliseconds, and then dies. You need to change the way you think about using Lambda, as it doesn't compare to a VPS (virtual private server) or an EC2 instance. You have to take a different approach, called serverless computing.
Instead, you can have API Gateway sit on top of the Lambda functions, and you can invoke these APIs to execute your code. Each Lambda function should do only one single task and nothing more.
As a matter of fact, the longer a Lambda function runs, the costlier it becomes in terms of billing. So having a short-running function is the way to keep your bills in check.
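To make that concrete, a minimal sketch of a handler behind an API Gateway proxy integration; the query parameter is just an example:

```python
import json

# Minimal Lambda handler for an API Gateway proxy integration route.
# The statusCode/headers/body shape is what the proxy integration
# expects back from the function.
def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```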
If you want to use Lambda, try the serverless-stack tutorial - Ref: https://serverless-stack.com/.
Lambda does have outage issues, and the way to handle this is to use the Route 53 service as a load balancer.
Another good Reference:
Ref: https://serverless.com/
You can invoke Lambda via API Gateway and also via an ALB. The difference lies in the cost; API Gateway is considerably more expensive.
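For reference, wiring a Lambda function behind an ALB is done with a lambda-type target group. A sketch with boto3, where the function ARN and names are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
lambda_client = boto3.client("lambda", region_name="us-east-1")

# Placeholder ARN for the function the ALB should invoke.
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:my-function"

# A lambda-type target group has no protocol or port; the ALB invokes
# the function directly.
tg = elbv2.create_target_group(Name="my-lambda-tg", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Allow Elastic Load Balancing to invoke the function before
# registering it as a target.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=tg_arn,
)

# Register the function as the single target of the target group.
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": FUNCTION_ARN}])
```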