So we are starting to explore the AWS Glue service as an ETL service. The largest outstanding question in our org is "How will we be able to develop these scripts without console access?" Enter development endpoints. I fully understand how we can use the REPL/Zeppelin notebooks to develop our scripts and test them in our development VPC on AWS.
My only question is: does AWS allow you to keep those development endpoints inside a private subnet? It seems that the endpoint has a public DNS name. We have a policy of not allowing an instance to have a public DNS endpoint. Are we screwed here? Any help is appreciated! Thanks!
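For reference, this is roughly the call we'd be making when provisioning an endpoint into a specific subnet. The endpoint name, role ARN, and IDs below are placeholders, and whether this actually keeps the endpoint fully private is exactly what I'm asking:

# Request a Glue development endpoint inside a specific subnet.
# All names and IDs below are hypothetical placeholders.
aws glue create-dev-endpoint \
  --endpoint-name dev-ep-private \
  --role-arn arn:aws:iam::123456789012:role/GlueDevEndpointRole \
  --subnet-id subnet-0abc1234 \
  --security-group-ids sg-0abc1234 \
  --public-key file://id_rsa.pub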
I have spent the last two weeks working to set up an API Gateway private integration in AWS. This article from AWS, AWS Private Integration Tutorial, very briefly touches on what is required to actually set this up. I finally have everything working, but there is one lingering security detail that I just haven't been able to figure out, and quite honestly there must be some concept of this setup that I didn't fully grasp if I ended up at this point. The fix is probably simple.
My AWS Architecture
I have a Spring Boot app running in Elastic Beanstalk on a private subnet in my VPC. I then created a public-facing (external) Network Load Balancer (NLB). This NLB has a target group that points to the EC2 instance running my Spring Boot app. Next, I set up a VPC Link that points to my NLB in AWS API Gateway and proceeded to create some endpoints that point to my Spring Boot app. Up to this point everything is really solid; I tested the endpoints through API Gateway and all is good. I then took it one step further and secured my endpoints with an API key, tested it, and it's working great!
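For reference, the VPC Link step looked roughly like this, where the link name and NLB ARN are placeholders standing in for my real values:

# Create a VPC Link in API Gateway (REST) targeting the NLB.
# The name and target ARN are hypothetical placeholders.
aws apigateway create-vpc-link \
  --name spring-boot-vpc-link \
  --target-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/50dc6c495c0c9188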
Problem
The big problem I suddenly realized is that the DNS name (A record) of my NLB can still be accessed directly, bypassing the AWS API Gateway entirely.
Question
How can I force all traffic through the AWS API Gateway and prevent direct access to my API through the NLB DNS?
I'm working with AWS and need some support, please.
My team provisioned Direct Connect, and we can now enjoy private connectivity from our corporate network to our VPC on AWS.
Management is asking if it's possible for AWS CLI commands to be executed through Direct Connect and not through the public internet. Indeed, we have a lot of scripts with a lot of commands like aws ec2 describe-instances and so on. I guess these call the public REST API of the EC2 service that AWS exposes.
They're asking if it's possible for these calls not to go through the public internet.
I've seen VPC endpoints mentioned. Are they the solution?
See How can I access my Amazon S3 bucket over Direct Connect? for how to do this with S3.
Basically:
After BGP is up and established, the Direct Connect router advertises all global public IP prefixes, including Amazon S3 prefixes. Traffic heading to Amazon S3 is routed through the Direct Connect public virtual interface. The public virtual interface is routed through a private network connection between AWS and your data center or corporate network.
You can extend this to other Amazon services, per the AWS Direct Connect FAQs:
All AWS services, including Amazon Elastic Compute Cloud (EC2), Amazon Virtual Private Cloud (VPC), Amazon Simple Storage Service (S3), and Amazon DynamoDB can be used with Direct Connect.
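And since the question mentions VPC endpoints: for APIs that support AWS PrivateLink (the EC2 API does), you can also create an interface endpoint inside the VPC and send CLI traffic to it over your private connectivity. A minimal sketch, assuming us-east-1 and placeholder VPC, subnet, and security group IDs:

# Create an interface VPC endpoint for the EC2 API (PrivateLink).
# The VPC, subnet, and security group IDs are hypothetical placeholders.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ec2 \
  --subnet-ids subnet-0abc1234 \
  --security-group-ids sg-0abc1234

# CLI calls can then target the endpoint's DNS name explicitly:
aws ec2 describe-instances \
  --region us-east-1 \
  --endpoint-url https://vpce-0abc1234-abcdefgh.ec2.us-east-1.vpce.amazonaws.com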
Refer to jarmod's answer below for the answer to the question, but read on for why I think this sounds like an XY problem.
There is no reason at all why management should be concerned.
Third-party auditors assess the security and compliance of AWS services as part of multiple AWS compliance programs. Using the AWS CLI to access a service does not alter that service's compliance; AWS has compliance programs covering pretty much every IT compliance framework out there globally.
Compliance aside, the AWS CLI does not store any customer data (so there should be no data protection concerns) and transmits data securely (unless you manually override this).
The user guide highlights this:
The AWS CLI does not itself store any customer data other than the credentials it needs to interact with the AWS services on the user's behalf.
By default, all data transmitted from the client computer running the AWS CLI and AWS service endpoints is encrypted by sending everything through a HTTPS/TLS connection.
You don't need to do anything to enable the use of HTTPS/TLS. It is always enabled unless you explicitly disable it for an individual command by using the --no-verify-ssl command line option.
As if that's not enough, you can also add increased security when communicating with AWS services by enforcing a minimum TLS version of 1.2 for the CLI.
Much bigger attack vectors should be targeted instead, such as:
The physical accessibility of the device storing the credentials
Permanent access tokens vs. temporary credentials (a sketch of the latter follows this list)
IAM policies associated with the credentials
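On the temporary-credentials point, a minimal sketch of trading long-lived keys for short-lived ones, where the role ARN and session name are placeholders:

# Exchange long-lived IAM user keys for temporary credentials.
# The role ARN and session name are hypothetical placeholders.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/ExampleCliRole \
  --role-session-name example-session \
  --duration-seconds 3600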
The AWS CLI is secure.
I recently realised my Next.js project I deployed on AWS Amplify uses Lambda, but I need to deploy it on EC2. Is this possible at all?
I'm new to this whole thing, so excuse the ignorance, but for certain reasons I need to use EC2.
Is that possible?
Thanks
AWS EC2 is a service that covers all the compute, storage, and networking needs you may have for any application you want to develop. From its site:
Amazon EC2 offers the broadest and deepest compute platform with a choice of processor, storage, networking, operating system, and purchase model.
Source
Basically, you can create any number of virtual machines, connected among themselves and to the Internet however you like, and use any data persistence strategy.
There are many things to unpack when using EC2, but to start, I would suggest that you learn how to set up an EC2 instance using the default VPC that comes with your account. Be sure to configure the instance to have a public IP so you can access it through the Internet. Once inside, you can deploy your application however you like and access it through your public IP.
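As a first step, launching such an instance from the CLI might look like the sketch below; the AMI ID, key pair, and security group are placeholders for whatever exists in your account:

# Launch one instance in the default VPC. Default-VPC subnets
# auto-assign a public IP, so the instance is reachable over the Internet.
# The AMI ID, key pair name, and security group ID are placeholders.
aws ec2 run-instances \
  --image-id ami-0abc1234 \
  --instance-type t3.micro \
  --key-name my-key-pair \
  --security-group-ids sg-0abc1234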
Before moving on, try to decide why you need your app to run on EC2. Lambda is a fully managed, serverless compute product (often described as FaaS, Function as a Service), meaning that all of the infrastructure is managed by the service provider. On the other hand, EC2 is an IaaS (Infrastructure as a Service) product, which means that you have to handle most of the infrastructure yourself.
We have all of our cloud assets currently inside Azure, including a Service Fabric cluster containing many applications and services that communicate with Azure VMs through Azure Load Balancers. The VMs have both public and private IPs, and the load balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS. Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the load balancers using the VMs' private IP addresses. So the only ways I could see achieving this are to either:
1. Keep the load balancers in Azure and direct the traffic from them to the AWS VMs.
2. Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above are technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IP of the AWS VM, right? Is that not less secure? If I set it up to go through a VPN (if that's even possible), is that as secure as using internal private IPs, as in the current load balancer config?
For #2, again, I'm not sure if this is technologically achievable: can we even have Service Fabric services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial sense of how this would work, and here is a sample (not approved by Microsoft) of a cross-region Service Fabric cluster configuration. (I know these are different regions in Azure, not different cloud providers, but this sample can be of use to see how some of these things are configured.)
Hope this helps.
Based on the details provided in the comments of your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end the only thing left in Azure would be this Azure resource screen.
Extending Oleg's answer, "creating a multi-region / multi-datacenter cluster in Service Fabric is possible," I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on premises.
The one detail that is not made very clear is that any option not hosted in Azure requires an extra level of management, because you have to handle the resources (VMs, load balancers, auto scaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside for a long time on the SF roadmap because they are very complex to do, which is why Microsoft avoids recommending them, but it is possible.
If you want to go for the AWS approach, I'd point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster
This is the first of a four-part tutorial with guidance on how you can set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you could still access them from AWS without any problems.
Does anyone have any idea where the endpoint address of the AWS EC2 IaaS service is stored in the WSO2 Private PaaS install package?
The question may seem strange: normally we do not set this information, because the world is using the "same" AWS EC2 service, and WSO2 Private PaaS knows the endpoint address and can handle it automatically for us.
Yet in order to comply with the Chinese government's internet censorship, Amazon has had to completely segregate, at a logical level, the AWS EC2 service provided within mainland China.
So you are actually using a totally "different" AWS EC2 service in China; you have to replace the "global" service endpoint with the Chinese one.
Please advise.
Thanks!
You can try to set the following system property: aws-ec2.endpoint
https://jclouds.apache.org/guides/aws-ec2/
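For example, assuming the China (Beijing) region, that property could be passed as a JVM argument when starting the server; the exact startup script, and whether your build picks the property up this way, may vary:

# Point the jclouds aws-ec2 provider at the China (Beijing) EC2 endpoint.
# Add this to the JVM options in the WSO2 server startup script.
-Daws-ec2.endpoint=https://ec2.cn-north-1.amazonaws.com.cn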