I have set up a project in AWS. There I was using EC2 instances, AMIs, Elastic IPs, an Internet Gateway, NACLs, Route Tables, Security Groups, a custom VPC, private and public subnets, Elastic Load Balancing, Auto Scaling, a Launch Configuration, KMS keys, Lambda, an RDS Aurora instance, S3 buckets, Simple Email Service, Simple Queue Service, Simple Notification Service, and CloudWatch Logs. Now my client is asking me to migrate all services from the existing AWS account to a new AWS account.
How can I achieve this?
Just contact AWS support. If you are doing a migration rather than a copy, the account can be changed with no interruption of service, directly by AWS. Open a case in the AWS Support Center. See the docs.
If you need a copy of those services in a different account, that is a more complicated task, as you will have to create separate physical resources. For that I would recommend using CloudFormation.
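A minimal sketch of the copy approach with boto3, assuming a CLI profile named new-account configured for the target account and a template file describing the resources; both names are placeholders:

```python
import boto3

# Placeholder profile for the *new* account and a hand-written or exported template.
session = boto3.Session(profile_name="new-account")
cfn = session.client("cloudformation")

with open("stack-template.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="migrated-stack",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required if the template creates IAM resources
)
cfn.get_waiter("stack_create_complete").wait(StackName="migrated-stack")
```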
Related
I have a Python application deployed on EKS (Elastic Kubernetes Service). This application saves large files inside an S3 bucket using the AWS SDK for Python (boto3). Both the EKS cluster and the S3 bucket are in the same region.
My question is, how is communication between the two services (EKS and S3) handled by default?
Do both services communicate directly and internally through the Amazon network, or do they communicate externally via the Internet?
If they communicate via the internet, is there a step-by-step guide on how to establish a direct internal connection between both services?
how is communication between the two services (EKS and S3) handled by default?
By default, the network topology of your EKS cluster provides a route to the public AWS S3 endpoints.
Do both services communicate directly and internally through the Amazon network, or do they communicate externally via the Internet?
Your cluster needs network access to those public AWS S3 endpoints, for example via worker nodes running in a public subnet, or via a NAT gateway for nodes in a private subnet.
...is there a step-by-step guide on how to establish a direct internal connection between both services?
You create VPC endpoints for S3 in the VPC where your EKS cluster runs to ensure that network communication with S3 stays within the AWS network. VPC endpoints for S3 support both the interface and gateway types. Try this article to learn the basics of S3 endpoints; you can use the same method to create endpoints in the VPC where your EKS cluster runs. Requests to S3 from your pods will then use the endpoint to reach S3 within the AWS network.
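A minimal boto3 sketch of creating a gateway endpoint for S3, assuming hypothetical VPC and route table IDs (substitute the ones your EKS worker subnets use):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs; use the VPC and route tables of your EKS worker subnets.
resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",                 # S3 also supports the Interface type
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
print(resp["VpcEndpoint"]["VpcEndpointId"])
```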
You can add S3 access to your EKS node IAM role. This link shows how to add ECR registry access to the EKS node IAM role, but the process is the same for S3.
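As a sketch, attaching an inline S3 policy to the node role with boto3; the role, bucket, and policy names are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder names; substitute your cluster's node instance role and bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
    }],
}

iam.put_role_policy(
    RoleName="my-eks-node-instance-role",
    PolicyName="AllowS3Access",
    PolicyDocument=json.dumps(policy),
)
```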
The other way is to make credentials available as environment variables in your container (see this link), though I would recommend the first way.
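With the role approach, code in the pod needs no explicit credentials at all; boto3 falls back to the instance profile. A minimal sketch with a placeholder bucket name:

```python
import boto3

# No credentials passed: boto3's default chain picks up the node's IAM role
# (or a service-account role), so the pod inherits the S3 permissions above.
s3 = boto3.client("s3")
s3.upload_file("/tmp/large-file.bin", "my-bucket", "uploads/large-file.bin")
```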
I'm working with AWS and need some support please.
My team provisioned Direct Connect and we can now enjoy private connectivity from our corporate network to VPC on AWS.
Management is asking whether it's possible for AWS CLI commands to be executed through Direct Connect rather than through the public internet. Indeed, we have a lot of scripts with many commands like aws ec2 describe-instances and so on. I guess these call the public REST API of the EC2 service that AWS exposes.
They're asking if it's possible that these calls do not go through the public internet.
I've seen VPC endpoints. Are they the solution?
See How can I access my Amazon S3 bucket over Direct Connect? for how to do this with S3.
Basically:
After BGP is up and established, the Direct Connect router advertises all global public IP prefixes, including Amazon S3 prefixes. Traffic heading to Amazon S3 is routed through the Direct Connect public virtual interface. The public virtual interface is routed through a private network connection between AWS and your data center or corporate network.
You can extend this to other Amazon services, per the AWS Direct Connect FAQs:
All AWS services, including Amazon Elastic Compute Cloud (EC2), Amazon Virtual Private Cloud (VPC), Amazon Simple Storage Service (S3), and Amazon DynamoDB can be used with Direct Connect.
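If it helps to verify, here is a small boto3 sketch (the CLI itself needs no changes, since the routing happens at the network layer) that lists your Direct Connect virtual interfaces so you can confirm the public VIF is up:

```python
import boto3

dx = boto3.client("directconnect")

# Once the public virtual interface is "available" and BGP is established,
# traffic to public AWS endpoints (EC2 API, S3, ...) is routed over Direct Connect.
for vif in dx.describe_virtual_interfaces()["virtualInterfaces"]:
    print(vif["virtualInterfaceName"],
          vif["virtualInterfaceType"],   # "public" or "private"
          vif["virtualInterfaceState"])
```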
Refer to #jarmod's answer below for the answer to the question but read on for why I think this sounds like an XY problem.
There is no reason at all why management should be concerned.
Third-party auditors assess the security and compliance of AWS services as part of multiple AWS compliance programs. Using the AWS CLI to access a service does not alter that service's compliance - AWS has compliance programs which pretty much cover every IT compliance framework out there globally.
Compliance aside, the AWS CLI does not store any customer data (so there should be no data protection concerns) and transmits data securely (unless you manually override this).
The user guide highlights this:
The AWS CLI does not itself store any customer data other than the credentials it needs to interact with the AWS services on the user's behalf.
By default, all data transmitted from the client computer running the AWS CLI and AWS service endpoints is encrypted by sending everything through a HTTPS/TLS connection.
You don't need to do anything to enable the use of HTTPS/TLS. It is always enabled unless you explicitly disable it for an individual command by using the --no-verify-ssl command line option.
As if that's not enough, you can also add increased security when communicating with AWS services by enforcing a minimum TLS version of 1.2 for the CLI.
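If you want to check this yourself, here is a minimal Python sketch (standard library only) that refuses anything older than TLS 1.2 when connecting to a service endpoint; the hostname is just an example:

```python
import socket
import ssl

# Example endpoint; any AWS service endpoint works the same way.
host = "ec2.us-east-1.amazonaws.com"

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # e.g. "TLSv1.2" or "TLSv1.3"
```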
Much bigger attack vectors should be the target instead, such as:
The physical accessibility of the device storing the credentials
Permanent access tokens vs. temporary credentials
IAM policies associated with the credentials
The AWS CLI is secure.
I have my RDS instances in one AWS account, and I have set up my application on a Kubernetes cluster in another account. I need the application to talk to the RDS instances in the other account. I chose a VPC endpoint (PrivateLink) to achieve this, so that the RDS data is safe and secure. Is it possible to have a PrivateLink connection established between multiple AWS accounts? Both accounts are under the same AWS organization.
Is it possible to have a PrivateLink connection established between multiple AWS accounts?
Yes. The AWS documentation explains that a service consumer can be a different account:
Grant permissions to specific service consumers (AWS accounts, IAM users, and IAM roles) to create a connection to your endpoint service.
Setting up permissions for other accounts on your PrivateLink service is explained in:
Adding and removing permissions for your endpoint service
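As a sketch of what that looks like with boto3, assuming hypothetical service and account IDs, the provider account grants the consumer account permission to connect:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: the endpoint service in the provider (RDS) account and
# the root ARN of the consumer account that runs the Kubernetes cluster.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId="vpce-svc-0123456789abcdef0",
    AddAllowedPrincipals=["arn:aws:iam::111122223333:root"],
)
```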
I think the better architecture would be to use VPC Peering to connect the VPC with the database to the VPC with the Kubernetes cluster.
The data remains "safe and secure" because it stays within the two VPCs.
No Network Load Balancer would be required.
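A minimal sketch of that alternative with boto3, assuming hypothetical VPC and account IDs; the peer account still has to accept the request, and both sides need routes added:

```python
import boto3

ec2 = boto3.client("ec2")

# Request peering from the Kubernetes VPC to the RDS VPC in the other account.
resp = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111bbb22222c",      # requester: Kubernetes cluster VPC
    PeerVpcId="vpc-0ddd3333eee44444f",  # accepter: RDS VPC
    PeerOwnerId="111122223333",         # account that owns the RDS VPC
)
print(resp["VpcPeeringConnection"]["VpcPeeringConnectionId"])

# The peer account must then call accept_vpc_peering_connection, and both
# VPCs need route table entries pointing at the peering connection.
```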
I want to execute AWS CLI commands for RDS not via the internet but via a VPC network, mainly to create manual snapshots of RDS.
However, VPC endpoints support only the RDS Data API, according to the following document:
VPC endpoints - Amazon Virtual Private Cloud
Why? I need to execute the command within a closed network due to security rules.
Just to reiterate: you can still connect to your RDS database over the normal private network, using whichever library you choose, to perform any DDL, DML, DCL, and TCL commands. In your case, however, you want to create a snapshot, which goes via the service endpoint.
VPC endpoints exist to connect to the service APIs that power AWS (think of the interactions you perform in the console, SDK, or CLI). At the moment this means that to create, modify, or delete RDS resources, you need to use the API over the public internet (using HTTPS for encrypted traffic).
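For illustration, a minimal boto3 sketch of the snapshot call, with hypothetical identifiers; this is a control-plane request to the RDS service API, not a connection to the database, so today it traverses the public HTTPS endpoint:

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers; this call goes to the RDS service API endpoint,
# not to the database itself.
rds.create_db_snapshot(
    DBSnapshotIdentifier="manual-snapshot-001",
    DBInstanceIdentifier="my-db-instance",
)
```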
VPC endpoints are added over time; just because a specific API is not supported now does not mean it never will be. There is integration work that has to be carried out by the team of each AWS service to make VPC endpoints work.
Google failed me again, or maybe I wasn't clear enough in my question.
Is there an easy way to determine which services are VPC-bound and which are not?
For example, EC2 and RDS require a VPC setup.
Lambda and S3 are publicly available services and don't need a VPC setup.
The basic services that require an Amazon VPC are all related to Amazon EC2 instances, such as:
Amazon RDS
Amazon EMR
Amazon Redshift
Amazon Elasticsearch
AWS Elastic Beanstalk
etc.
These resources run "on top" of Amazon EC2 and therefore connect to a VPC.
There are also other services that use a VPC, but you would only use them if you are using some of the above services, such as:
Elastic Load Balancer
NAT Gateway
So, if you wish to run "completely non-VPC", then avoid services that are "deployed". That means you would use AWS Lambda for compute, probably DynamoDB for your database, Amazon S3 for object storage, and so on. This is otherwise referred to as going "serverless".
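For example, a minimal "serverless" sketch touching only non-VPC services; the table and bucket names are placeholders:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

# A Lambda handler using only non-VPC services: DynamoDB and S3.
def handler(event, context):
    dynamodb.Table("orders").put_item(Item={"id": event["id"], "status": "new"})
    s3.put_object(Bucket="my-archive-bucket",
                  Key=f"orders/{event['id']}.json",
                  Body=b"{}")
    return {"statusCode": 200}
```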