I was talking to one of my friends about API access. Say I have given another AWS account read-only access to my resources, e.g. EC2, and that account scans the metadata for all my EC2 instances. Per my friend, this API call belongs to the control plane and connects to the AWS EC2 API endpoint over the internet. As per him, this call cannot be blocked by any number of VPC controls like NACLs, Security Groups, etc.; only data plane calls go through the VPC. I sort of agreed, but I am still not convinced that a call that scans EC2 instances, like listing all instances, cannot be blocked. Say I have granted read-only permission and still want to block that account: is it true that VPC controls cannot stop that call? Please help me understand this better, for the case where I am in my corporate network and the consuming account that wants to scan my EC2 instances belongs to a SaaS provider.
The Amazon EC2 service is responsible for creating and managing Amazon EC2 instances, VPCs, networking, etc. The API endpoint for the EC2 service resides on the Internet. Permission to make API calls is controlled by AWS Identity and Access Management (IAM).
This is totally separate from the ability to connect to an Amazon EC2 instance. Any such connections would go via the virtualized network of the VPC.
For example, imagine an Amazon EC2 instance that is turned off (that is, in a Stopped state). There are no actual resources assigned to a Stopped instance -- it is just some metadata sitting in a database. It would not be possible to 'connect' to this instance because, in a networking sense, it does not exist. However, it would be possible to connect to the AWS EC2 service and issue a command to Start the instance. This API call is made via the Internet and does not require any connectivity to the VPC.
Your wording that "any data plane calls only goes to VPC" is not correct -- the API calls go to the EC2 service and do not involve the VPC. The VPC is purely a network configuration that determines how resources can communicate with each other.
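Since access is governed by IAM rather than the network, the way to block that SaaS account's scan is at the IAM layer, not with NACLs or Security Groups. A minimal sketch, assuming the SaaS account assumes a cross-account role in your account (the policy below is illustrative, not a complete policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BlockEc2Scanning",
      "Effect": "Deny",
      "Action": "ec2:Describe*",
      "Resource": "*"
    }
  ]
}
```

An explicit Deny overrides any Allow, so attaching this to the cross-account role blocks calls like DescribeInstances even though the role is otherwise read-only. Alternatively, you can simply remove the role or its trust relationship. No VPC control is involved either way.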
When I go through the documents, using Session Manager we can connect to an instance in a private subnet without having a bastion host at all [direct port forwarding from local to the private EC2].
But in the RDS case, even though we are making the connection using Session Manager, we need an EC2 instance between local and the private RDS.
Could anyone explain why it is like that? Please share a document that explains it as well.
AWS Systems Manager Session Manager allows you to connect to an instance in a Private Subnet because the instance is actually running an 'SSM Agent'. This piece of code creates an outbound connection to the AWS Systems Manager service.
Then, when you request a connection to the instance, your computer connects to the AWS Systems Manager service, which forwards the request to the agent on the instance. The AWS Systems Manager service is effectively acting as a Bastion for your connection.
AWS Systems Manager Session Manager cannot provide a connection to an Amazon RDS server because there is no ability to 'login' to an Amazon RDS server. Given that your RDS server is running in a Private Subnet, it is therefore necessary to port-forward via an EC2 instance in the same VPC as the RDS server. This can be done via a traditional Bastion EC2 instance in a Public Subnet, or via an EC2 instance in a Private Subnet by taking advantage of the Port Forwarding capabilities of AWS Systems Manager Session Manager.
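As a sketch, the port-forwarding approach described above can be started from the AWS CLI using the AWS-StartPortForwardingSessionToRemoteHost document (the instance ID, RDS endpoint, and port numbers below are placeholders for your own values):

```shell
# Forward local port 13306 through a private EC2 instance to a private RDS endpoint
aws ssm start-session \
    --target i-0123456789abcdef0 \
    --document-name AWS-StartPortForwardingSessionToRemoteHost \
    --parameters '{"host":["mydb.example.ap-southeast-1.rds.amazonaws.com"],"portNumber":["3306"],"localPortNumber":["13306"]}'
```

After this, a database client on your machine can connect to localhost:13306 and reach the RDS instance. This assumes the intermediary EC2 instance is running the SSM Agent with an instance profile that permits Session Manager.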
The same question was answered on AWS re:Post by #Uwe K. Please refer below.
SSM allows many more functions - and changes! - to an instance than just connecting to it. Having full SSM functionality on an RDS instance would thus undermine the Shared Responsibility Model we use for RDS (you could also say: it would violate the "black box" principle of RDS). Therefore, you need an intermediary instance that forwards the TCP port exposed by RDS to your local machine.
Further reading:
The RDS-specific Shared Responsibility Model is explained here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.html
A general overview of the Shared Responsibility Model: https://aws.amazon.com/compliance/shared-responsibility-model/
In order to connect to any EC2 instance with AWS Systems Manager, the SSM Agent must be installed on that machine and the appropriate permissions need to be set up for the instance.
At the moment, AWS does not support this for RDS directly. To support such a setup, they would probably need to install the agent on all RDS instances, which generates quite some overhead, to say nothing of the other complexities such a setup would bring.
So at present, the most effective way to connect is to set up a tunnel via an EC2 instance.
I can't figure out how to make them talk using API calls. Previously I used API Gateway, which would trigger Lambdas, and those Lambdas would interact with DynamoDB and other services and send me back a JSON response. Now I want to shift to EC2 instances, skip API Gateway usage entirely, and let a server I run on EC2 do the computation for me. Do I need to deploy a web service (Django REST Framework) on the EC2 instance and then call it from my frontend? If yes, I need a little guidance on how.
And suppose I want to access S3 storage from my Django REST service on EC2. Can I do it without having to enter the access key and ID, and use roles instead, just like how I would access S3 from the EC2 instance itself without an access key and ID? Traditionally with the SDK we have to use access keys and secret keys to even get authorized to use services, so I was wondering if there was a way to get around this, since the program will be running on the EC2 instance itself. One really inefficient way would be to run a batch command that makes the EC2 instance interact with the services I need without the SDK, using roles instead, but that is really inefficient and too much work as far as I can see.
As you are familiar with API Gateway, you can use it to connect to your EC2 instance as well, via private integration, with the use of VPC Links.
You can create an API Gateway API with private integration to provide your customers access to HTTP/HTTPS resources within your Amazon Virtual Private Cloud (Amazon VPC). Such VPC resources are HTTP/HTTPS endpoints on an EC2 instance behind a Network Load Balancer in the VPC.
You can go through this document for step-by-step integration.
If you do not want to use API Gateway any more, then you can simply use Route 53 to route traffic to the EC2 instance; all you need is the IP address of the EC2 instance and a hosted zone created in Route 53.
Here is a tutorial for your reference.
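Regarding the second part of the question (S3 access without hard-coded keys): if the EC2 instance has an IAM role attached via an instance profile, the AWS SDKs pick up temporary credentials from the instance metadata automatically, so you never pass an access key in code. A minimal boto3 sketch, assuming the code runs on such an instance, boto3 is installed, and the role allows reading the bucket (the bucket name is a placeholder):

```python
import boto3

# No explicit credentials here: boto3's credential chain falls back to the
# EC2 instance profile (temporary credentials fetched from the instance
# metadata service) when no keys are configured.
s3 = boto3.client("s3")

# List the objects in a bucket the instance role can read (placeholder name)
response = s3.list_objects_v2(Bucket="my-example-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"])
```

The same pattern applies inside a Django view: just create the client with no credentials and let the role supply them. This also means credentials rotate automatically and never appear in your source code.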
We are currently working with two environments/accounts:
Dev and staging.
We are planning to spin up a new instance to install Jenkins for CI/CD in the dev environment.
We are also wondering if we can use that same instance in dev as the CI/CD for the staging account as well.
How will access work?
How can the CI/CD instance access the instances in staging for CI/CD?
Do we need to set up a cross-account role for this that allows the dev CI/CD to access the staging instances?
or
Is the private key enough to have access to EC2 irrespective of account?
You can definitely enable this. Take a look at VPC peering.
This feature enables two VPCs, whether in different accounts or different regions, to connect to each other, as their networks become connected via a tunnel between them.
When you implement this the following factors are important:
The CIDR ranges of the two VPCs must not overlap.
The VPC peering connection must be added to the route table(s) in both VPCs, so each VPC knows how to reach the other.
You will need to add security group rules to allow access from the instances that you want to be able to connect.
By doing this you also benefit from network connections traversing the AWS backbone rather than the public internet, which improves both security and performance.
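The steps above can be sketched with the AWS CLI; the VPC IDs, account ID, peering connection ID, route table ID, and CIDR below are all placeholders for your own values:

```shell
# From the dev account: request a peering connection to the staging VPC
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-0dev1111 \
    --peer-vpc-id vpc-0stg2222 \
    --peer-owner-id 222222222222

# From the staging account: accept the request
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-0abc1234

# In each VPC's route table, add a route to the other VPC's CIDR
# (repeat in the staging account with the dev CIDR)
aws ec2 create-route \
    --route-table-id rtb-0dev3333 \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-0abc1234
```

Security group rules permitting the CI/CD instance's traffic (e.g. SSH from its private IP) are still needed on top of this; the private key alone is never enough if there is no network path.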
We are a business that essentially provides proxy servers. Our servers currently run on AWS Singapore.
But we have Malaysian clients who would like to be able to view Malaysian content from streaming services (e.g., Netflix, iflix) through our servers. At the moment, they can either view only Singaporean content or nothing at all, as the streaming services detect Singaporean IPs being used with Malaysian user accounts.
Does AWS have a service for us to register instance IPs as being in a different country than where the AWS server farm is?
AWS does not have "a service for us to register instance IPs as being in a different country".
However, an AWS account can launch an Amazon EC2 instance in any region around the world, and it will receive a public IP address that is (usually) mapped to that part of the world. If you use that EC2 instance, it will "appear" to be in that part of the world (because it is!).
However, please note that many online services block Amazon EC2 instance IP address ranges to prevent such practices.
I have an Amazon EC2 instance and a corresponding RDS instance that I want to keep private. I'd like to keep it so that only myself and the sysadmin can access these instances. I don't want to provide access to other developers.
However, one of my developers is working on a project right now where he needs to create/configure his own EC2/RDS instance. I could have my sysadmin perform this work, but I'd rather have the developer do it for the sake of expediency.
Is there any way to configure a group/role/policy in a way that allows me to keep my current instances private from the new developer, but would allow him to create his own EC2 and RDS instances?
Your question appears to be mixing several security concepts, such as 'private', 'group/role/policy' and 'firewalls'.
An Amazon EC2 instance has several layers of security:
First, there is the ability to login to the EC2 instance. This is managed by you, typically by creating users on the instance (in either Linux or Windows) and associating a password or public/private keypair. Only people who have login credentials will be able to access the instance.
Second, there is the ability to reach the instance. Security Groups control which ports are open from which IP address range. Therefore, you could configure a security group to only make the instance accessible from your own IP address or your own private network. Your instance might also be in a private subnet that has no Internet connectivity. This again restricts access to the instance.
A person can therefore only login to an instance if they have login credentials, if the security group(s) permit access on the protocol being used (RDP or SSH) and if the instance is reachable by the user from the Internet or private network.
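The 'reachability' layer above can be sketched with a single security group rule; the group ID and IP address below are placeholders for your own values:

```shell
# Allow SSH (port 22) only from a single admin IP address --
# the instance is unreachable over SSH from anywhere else
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.25/32
```

With no other inbound rules, even a person holding valid login credentials cannot reach the instance from a different address.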
Similarly, an Amazon Relational Database Service (RDS) instance is protected by:
Login credentials: A master user login is created when the database is launched, but additional users can be added via normal CREATE USER database commands
Security Groups: As with EC2 instances, security groups control what ports are open to a particular range of IP addresses
Network security: As with EC2 instances, an RDS instance can be placed in a private subnet, which is not accessible from the Internet.
Please note that none of the above controls involve Identity and Access Management (IAM) users/groups/roles/policies, which are used to grant access to AWS services, such as the ability to launch an Amazon EC2 instance or an Amazon RDS instance.
So, the fact that you have existing Amazon EC2 and Amazon RDS instances has no impact on the security of any other instances that you choose to launch. If a user cannot access your existing services, then launching more services will not change that situation.
If you wish to give another person the ability to launch new EC2/RDS instances, you can do this by applying an appropriate policy on their IAM User entity. However, you might want to be careful about how much permission you give them, because you might also be granting them the ability to delete your existing instances, change the master password, create and restore snapshots (thereby potentially accessing your data) and change network configurations (potentially exposing your instances to the Internet).
When granting IAM permissions to somebody, it is recommended that you grant least privilege, which means you should only give them the permissions they need and nothing more. If you are unsure about how much permission to give them or how to configure these permissions, you would be wise to have your System Administrator create the instances on their behalf. That way, you are fully aware of what has been done and you have not potentially exposed your systems.
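As one illustrative sketch of least privilege, the developer's IAM user could be allowed to launch new resources while being explicitly denied destructive actions on anything tagged as yours (the tag key/value are assumptions; check each service's supported condition keys before relying on this):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLaunchingNewResources",
      "Effect": "Allow",
      "Action": ["ec2:RunInstances", "ec2:Describe*", "rds:CreateDBInstance"],
      "Resource": "*"
    },
    {
      "Sid": "DenyTouchingProtectedResources",
      "Effect": "Deny",
      "Action": [
        "ec2:StopInstances",
        "ec2:TerminateInstances",
        "rds:DeleteDBInstance",
        "rds:ModifyDBInstance"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {"aws:ResourceTag/environment": "protected"}
      }
    }
  ]
}
```

Because an explicit Deny always wins, the developer could create and manage their own instances but not stop, terminate, or modify the tagged ones. Note that this does not by itself prevent read access or network-level access; the layers described earlier still apply.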
OK, the best explanation of how things work is in #John Rotestein's response. But here are a few practical suggestions (which should be considered a complement to John's response):
You can create separate subnets and give your developers permission, via IAM Policies, to run instances in only one of the subnets. But your developers can still reach your instance, so you must configure OS/DB/application-level restrictions.
If your company does NOT use a shared gateway to the internet, you can define a Network ACL to limit access to your exclusive network using your IP address. If you use a shared gateway, you will not be able to use this solution.
In the second case, one way to limit access is to put your instance in a private subnet and create a bastion host in your public subnet to be used only by you (this solution must be configured for your RDS instances too). The bastion host will be reachable by your developers, but you can use a specific key pair that only you have access to. Just keep in mind that your instances and RDS will not be available from the internet.
But I think the simplest solution would be to create different VPCs, one for your team and the other for the development team. In this solution you can restrict your developers' access to all resources in the "main" VPC. Of course, this also means no internet connection to your instances.