I was starting a Neptune database from this base stack:
https://s3.amazonaws.com/aws-neptune-customer-samples/v2/cloudformation-templates/neptune-base-stack.json
However, now I am wondering why a NAT Gateway and an Internet Gateway are created in this stack. Are they required for updates within Neptune? This seems like a huge security risk.
On top of that, these gateways are not cheap.
I would be happy for an explanation of this.
The answer is no, they are not required; AWS just sneaked some unnecessary, costly resources into the template.
Anyway, if you want to use the updated template without the NAT and Internet Gateways, use this one that I just created: https://neptune-stack-custom.s3.eu-central-1.amazonaws.com/base.json
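If you would rather prune the official template yourself than trust a third-party bucket, the idea boils down to deleting the gateway resources and anything that references them. A rough sketch (the resource Types are standard CloudFormation, but the dependency pass is naive, so validate the pruned template before deploying):

```python
import json
import urllib.request

# The AWS-provided Neptune base stack template from the question.
TEMPLATE_URL = (
    "https://s3.amazonaws.com/aws-neptune-customer-samples/"
    "v2/cloudformation-templates/neptune-base-stack.json"
)

# Resource types that exist only to provide outbound internet access.
GATEWAY_TYPES = {
    "AWS::EC2::NatGateway",
    "AWS::EC2::InternetGateway",
    "AWS::EC2::VPCGatewayAttachment",
    "AWS::EC2::EIP",
}

with urllib.request.urlopen(TEMPLATE_URL) as resp:
    template = json.load(resp)

resources = template["Resources"]
doomed = {name for name, res in resources.items()
          if res.get("Type") in GATEWAY_TYPES}

# Crude dependency pass: also drop resources (routes, etc.) whose body
# mentions anything already marked for removal. A real pass would walk
# Ref/Fn::GetAtt properly and repeat until nothing changes.
doomed |= {name for name, res in resources.items()
           if name not in doomed
           and any(dep in json.dumps(res) for dep in doomed)}

template["Resources"] = {n: r for n, r in resources.items()
                         if n not in doomed}

# Caveat: Outputs or Parameters may still reference removed resources,
# so run the result through `aws cloudformation validate-template`.
with open("neptune-base-stack-private.json", "w") as f:
    json.dump(template, f, indent=2)

print("Removed:", sorted(doomed))
```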
First, I saw in the billing section how much I pay for the NAT Gateway, and I need to understand exactly what I'm paying for. I suspect that the git checkouts (we use GitHub) from our instances make up most of the cost, but I need some way to prove / see exactly which traffic I pay for. Is that possible? If so, how?
The NAT gateway does not publish any information about how much data is processed per source/destination. You can deduce it by searching the VPC flow logs. This documentation may be useful.
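For example, if the flow logs are delivered to CloudWatch Logs, a Logs Insights query that sums bytes by source/destination for traffic touching the NAT gateway's private IP will show the top talkers. A minimal boto3 sketch; the log group name and NAT private IP are placeholders you'd substitute:

```python
import time
import boto3

logs = boto3.client("logs")

# Assumptions: VPC flow logs go to this CloudWatch Logs group, and this
# is the NAT gateway's private IP (shown in the VPC console).
LOG_GROUP = "/vpc/flow-logs"      # hypothetical log group name
NAT_PRIVATE_IP = "10.0.0.42"      # hypothetical NAT private IP

query = f"""
filter srcAddr = '{NAT_PRIVATE_IP}' or dstAddr = '{NAT_PRIVATE_IP}'
| stats sum(bytes) as totalBytes by srcAddr, dstAddr
| sort totalBytes desc
| limit 20
"""

now = int(time.time())
start = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=now - 24 * 3600,   # last 24 hours
    endTime=now,
    queryString=query,
)

# Poll until the query finishes, then print the top talkers.
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] not in ("Scheduled", "Running"):
        break
    time.sleep(1)

for row in result["results"]:
    print({f["field"]: f["value"] for f in row})
```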
I would like to let my AWS EKS nodes communicate with AWS RDS. Both are in the same account and region, so there is no need to implement any sci-fi architecture; a simple one would be enough.
I started to investigate and found a couple of Stack Overflow threads.
This is the first idea, where Security Groups for Pods are implemented. This is not my case; I'm happy to share the RDS with all the nodes. Am I wrong?
This is the second idea (actually in the same thread), where they suggest putting all the different resources (RDS and EKS) in the same VPC (shared?). Is that a good idea?
And finally, here a VPC Peering Connection is suggested as a good solution. Is it really? I can see the announcement here, which states that "all data transfer over a VPC Peering connection that stays within an Availability Zone (AZ) is now free". This is good, but it looks like an enterprise solution to a simple problem.
Can you help me choose a good solution that properly fits my scenario? Can I set up proper IAM roles instead?
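To make the "same VPC, shared with all the nodes" option concrete: in that setup it comes down to a single ingress rule on the RDS security group that allows the nodes' security group. A minimal boto3 sketch with hypothetical IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: the security group attached to the RDS instance,
# and the security group attached to the EKS worker nodes.
RDS_SG = "sg-0123456789abcdef0"
NODE_SG = "sg-0fedcba9876543210"

# One ingress rule: let the node security group reach the database port
# (5432 here, assuming PostgreSQL; use 3306 for MySQL).
ec2.authorize_security_group_ingress(
    GroupId=RDS_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{
            "GroupId": NODE_SG,
            "Description": "EKS worker nodes to RDS",
        }],
    }],
)
```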
Although this question may seem like something you've seen in the past, please read it fully before assuming, as it relates to a different type of internal access.
We currently have a few API Gateways serving different needs. These Gateways are public (regional) and accessed by public consumers.
On an ad-hoc basis we do back-end releases, which entail removing the Gateways from external (public) access. The process is then to make all the deployments needed and to test the Gateways once they are public again.
We go "internal" by adding the current load balancer(s) to a group that is only accessible from an internal IP range.
I'd like to know whether there is a way we could access the same Gateway internally while we are offline, to help speed up testing once we are external again.
One of the ways is to use a WAF. You can automate the process of changing the rule so the API is open only to you or to the world. An IP Match Condition rule can be useful for whitelisting.
https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-api-gateway-adds-support-for-aws-waf/
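The announcement above predates WAFv2, but the same pattern works with the current API: keep an IP set referenced by the web ACL attached to the API stage, and swap its contents around releases. A hedged boto3 sketch; the IP set name, ID, and CIDR are placeholders:

```python
import boto3

wafv2 = boto3.client("wafv2")

# Hypothetical IP set, created beforehand and referenced by an
# IPSetReferenceStatement in the web ACL on the API Gateway stage.
IP_SET_NAME = "release-allowlist"
IP_SET_ID = "11111111-2222-3333-4444-555555555555"

def set_allowed(addresses):
    """Replace the IP set contents; WAFv2 requires the current lock token."""
    current = wafv2.get_ip_set(
        Name=IP_SET_NAME, Scope="REGIONAL", Id=IP_SET_ID)
    wafv2.update_ip_set(
        Name=IP_SET_NAME,
        Scope="REGIONAL",
        Id=IP_SET_ID,
        Addresses=addresses,
        LockToken=current["LockToken"],
    )

# During a release: only the office range can reach the gateway.
set_allowed(["203.0.113.0/24"])

# After testing, reopen by flipping the web ACL rule or default action;
# emptying the IP set here would block everyone, not open access.
```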
You can have IP-based access for your API Gateway.
There's a blog I found that could be useful to you:
https://lobster1234.github.io/2018/04/14/amazon-api-gateway-ip-whitelisting/
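For reference, IP-based access on API Gateway can be done with a resource policy conditioned on aws:SourceIp, so no WAF is needed. A sketch of such a policy document (the CIDR is a placeholder); it gets attached to the REST API as its Resource Policy:

```python
import json

# Only requests from the given CIDR may invoke the API; everything else
# is implicitly denied once a resource policy is attached.
API_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}
            },
        }
    ],
}

print(json.dumps(API_POLICY, indent=2))
```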
Currently, I have built an application on EC2 instances in multiple regions. The problem is that each region's instance needs patching and maintenance, and it takes more effort to handle things when something fails.
I have decided to use Lambda@Edge instead of EC2, and my questions are:
Is Lambda@Edge better than these EC2 instances?
I need to make sure that Lambda@Edge would be reachable with the same latency as EC2 or better. Are there any official docs to prove this?
Thanks
If the issue you're facing is one of patching and maintenance of instances, then yes, Lambda or Lambda@Edge will absolutely remove that issue.
If the issue is latency and you want to keep your instances, you could create an Amazon CloudFront distribution that sits in front of your instances and serves cached content to your users; that might be the easiest way to start out.
Lambda@Edge has the same latency profile as CloudFront, since the functions run at the edge locations. Lambda functions deployed to CloudFront edge locations do have a couple of limitations, though: they must be created in us-east-1, they cannot use environment variables, and viewer-trigger functions have tighter memory and timeout limits than regular Lambda.
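To give a sense of the programming model, a minimal viewer-request handler looks roughly like this (Python; the /health short-circuit is just an illustration):

```python
# A minimal Lambda@Edge viewer-request handler. CloudFront hands the
# function the incoming request; the function returns it (possibly
# modified) to continue processing, or returns a response directly.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    # Example: answer health checks at the edge instead of forwarding
    # them to the origin.
    if request["uri"] == "/health":
        return {
            "status": "200",
            "statusDescription": "OK",
            "body": "healthy",
        }

    return request
```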
We have an EC2 instance which, for security reasons, has no internet access. At the same time, the code running on that server needs to call some Lambda functions. These two requirements seem contradictory, since without internet access the code cannot reach the Lambda API.
Does anyone have any suggestions on what my options are without sacrificing the security aspects of the project?
You won't be able to reach the AWS APIs in general without internet access. Two exceptions are S3 and DynamoDB, where you can create VPC endpoints and keep the traffic entirely on a private network. Some services can also be exposed through PrivateLink, but Lambda is not yet one of them.
You can learn more about those here: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
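Creating a gateway endpoint is straightforward; a boto3 sketch with placeholder IDs (the ServiceName must match your region):

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs. Gateway endpoints for S3/DynamoDB attach to route
# tables, so traffic to those services never leaves the VPC.
VPC_ID = "vpc-0123456789abcdef0"
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"

ec2.create_vpc_endpoint(
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.dynamodb",  # or ...us-east-1.s3
    RouteTableIds=[ROUTE_TABLE_ID],
)
```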
Depending on your security requirements, you might be able to use a NAT Gateway (https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html) or an Egress-Only Internet Gateway (https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/egress-only-internet-gateway.html), the latter of which handles IPv6 traffic only.
Those would provide access from the instance out to the internet, without the reverse being true. In many cases, this provides enough security.
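As a rough sketch of the NAT Gateway option (all IDs are placeholders): allocate an Elastic IP, create the gateway in a public subnet, and point the private route table's default route at it.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: a public subnet to host the NAT gateway, and the
# route table used by the instance's private subnet.
PUBLIC_SUBNET_ID = "subnet-0123456789abcdef0"
PRIVATE_ROUTE_TABLE_ID = "rtb-0fedcba9876543210"

# A NAT gateway needs an Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId=PUBLIC_SUBNET_ID,
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the gateway is available before adding the route.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route the private subnet's outbound traffic through the NAT gateway;
# inbound connections from the internet remain impossible.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```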
Otherwise, you will have to wait for PrivateLink to support Lambda. You can see more on how to work with PrivateLink here: https://aws.amazon.com/blogs/aws/new-aws-privatelink-endpoints-kinesis-ec2-systems-manager-and-elb-apis-in-your-vpc/