Supplying non-AWS credentials to an EC2 instance on launch

We have an EC2 instance that comes up as part of an Auto Scaling configuration. The instance can retrieve AWS credentials using the IAM role assigned to it. However, it needs additional configuration to get started, some of which is sensitive (passwords to non-EC2 resources) and some of which is not (configuration parameters).
It seems that the practice recommended by AWS is to store instance configuration in S3 and retrieve it at run time. The problem I have with this approach is that the configuration sits unprotected in an S3 bucket: an incorrect policy may expose it to parties who were never meant to see it.
What is a best practice for accomplishing my objective so that configuration data stored in S3 is also encrypted?
PS: I have read this question but it does not address my needs.

[…] incorrect policy may expose it to parties who were never meant to see it.
Well, then it's important to ensure that the policy is set correctly. :) Your best bet is to automate your deployments to S3 so that there's no room for human error.
Secondly, you can always encrypt the data before pushing it to S3, then decrypt it on the instance when the machine spins up.
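For example, one way to do that today is client-side encryption with AWS KMS. A rough boto3 sketch, where the key alias, bucket, and object key are made-up placeholders and the instance's IAM role would need kms:Decrypt plus s3:GetObject:

    import boto3

    kms = boto3.client("kms")
    s3 = boto3.client("s3")

    # --- on the machine that prepares the configuration ---
    secret_config = b"db_password=...\napi_token=..."   # whatever needs protecting
    encrypted = kms.encrypt(
        KeyId="alias/instance-config",                   # hypothetical KMS key alias
        Plaintext=secret_config,                         # direct KMS encrypt is limited to 4 KB
    )["CiphertextBlob"]
    s3.put_object(Bucket="my-config-bucket", Key="app/config.enc", Body=encrypted)

    # --- on the instance, at spin-up ---
    blob = s3.get_object(Bucket="my-config-bucket", Key="app/config.enc")["Body"].read()
    plaintext = kms.decrypt(CiphertextBlob=blob)["Plaintext"]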

AWS does not provide clear guidance on this situation, which is a shame. This is how I am going to architect the solution:
The developer box encrypts the per-instance configuration blob using the public portion of an asymmetric keypair and places it in an S3 bucket.
Restrict access to the S3 bucket using an IAM policy.
Bake the private portion of the asymmetric keypair into the AMI.
Apply an IAM role to the EC2 instance and launch it from the AMI.
The EC2 instance is able to download the configuration from S3 (thanks to the IAM role) and decrypt it (thanks to having the private key available).
The private key is never present on the developer box, so a compromise there exposes only the public (encryption) key, which cannot decrypt anything. If the private key is compromised (e.g. if the EC2 instance is rooted), then the attacker can decrypt the contents of the S3 bucket (but at that point they already have root access to the instance and can read the configuration directly from the running service).
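To make the flow concrete, here is a rough Python sketch of what the encrypt/decrypt steps could look like, using a hybrid scheme (RSA-OAEP to wrap a one-time symmetric key, Fernet for the blob itself), since RSA alone can only encrypt small payloads. The key file paths, bucket, and object keys are placeholders, not anything prescribed by AWS:

    import boto3
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    s3 = boto3.client("s3")

    # --- developer box: encrypt with the PUBLIC key and upload ---
    public_key = serialization.load_pem_public_key(open("config_pub.pem", "rb").read())
    data_key = Fernet.generate_key()                    # one-time symmetric key
    blob = Fernet(data_key).encrypt(open("instance-config.json", "rb").read())
    wrapped_key = public_key.encrypt(data_key, OAEP)    # RSA only wraps the small key
    s3.put_object(Bucket="my-config-bucket", Key="web/config.enc", Body=blob)
    s3.put_object(Bucket="my-config-bucket", Key="web/config.key", Body=wrapped_key)

    # --- EC2 instance: download via the IAM role, decrypt with the baked-in PRIVATE key ---
    private_key = serialization.load_pem_private_key(
        open("/etc/app/config_priv.pem", "rb").read(), password=None)
    wrapped = s3.get_object(Bucket="my-config-bucket", Key="web/config.key")["Body"].read()
    blob = s3.get_object(Bucket="my-config-bucket", Key="web/config.enc")["Body"].read()
    config = Fernet(private_key.decrypt(wrapped, OAEP)).decrypt(blob)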

Related

DynamoDB access From EC2

I have a container running on EC2, currently in a public VPC (it cannot be changed right now). In order for this resource to access DynamoDB, I created a user, limited its access to my table in DynamoDB, and then created access keys to use in my API calls.
My idea is to store these secrets in Secrets Manager and use its SDK from my EC2 instance to perform the operations I want.
However, that seems like a lot of effort, and creating a specific user just to limit the permissions does not seem right to me.
Am I on the right track? What would be the best approach to access DynamoDB programmatically from my EC2 instance?
I have read somewhere that I could grant role permissions so my EC2 instance could access DynamoDB.
Does that make sense?
Note: I have ECS running alongside my EC2 instance.
I am new to AWS and used to work a lot with Azure, but mostly with serverless applications, where I could easily use the identity management features to grant those permissions.
The details are all mentioned above.
I think it would be better to create an instance profile with the permissions you want for DynamoDB (an instance profile is essentially a container for an IAM role) and then use that role when you start the instance. That way you do not need to store credentials, and this is generally the recommended way to access services from an instance, rather than using access keys.
Ref: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
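As a rough illustration of how little code this needs once the instance profile is attached (the table name and key schema below are assumptions for the sketch, not something your setup requires):

    import boto3

    # No access keys in the code or on disk: boto3 picks up the temporary
    # credentials that the instance profile exposes via instance metadata.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("my-table")          # hypothetical table with partition key "pk"

    table.put_item(Item={"pk": "user#123", "name": "example"})
    item = table.get_item(Key={"pk": "user#123"}).get("Item")
    print(item)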
I did some searching and found this article; it matches your case exactly (EC2 + DynamoDB):
https://awstip.com/using-aws-iam-roles-with-ec2-and-dynamodb-7beb09af31b9
And yes, for EC2 the correct approach is to create an IAM role and attach it to your instance.
Also, the following command can be used from the instance to retrieve the temporary credentials (access key, secret key, and session token) that the IAM role provides:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<name-of-iam-role>

What is standard practice for storing private SSH keys for AWS Lambda

My Lambda function is responsible for SSH-connecting to some of our EC2 instances. Currently I just have our key file stored in the Lambda's deployment package, but this is obviously not a desirable solution for production. I have already researched a couple of ways, such as storing the key in a private S3 bucket or storing it as an encrypted environment variable. However, I'm not thrilled about pulling the key from the S3 bucket all the time, and an encrypted environment variable seems like something that wouldn't carry over to future Lambda functions either. What are some other industry-standard ways of storing private keys for Lambda use?
You can store encrypted secrets in Secrets Manager or in Parameter Store. For certain types of secrets, you can have them auto-rotated in Secrets Manager. Limit which IAM roles have access to the secrets and you can reduce potential misuse.
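For instance, a minimal sketch of a Lambda handler pulling an SSH private key from Secrets Manager with boto3 (the secret name is a made-up placeholder; the function's execution role would need secretsmanager:GetSecretValue on that one secret):

    import boto3

    secrets = boto3.client("secretsmanager")

    def handler(event, context):
        # "prod/ssh/bastion-key" is a hypothetical secret name.
        private_key_pem = secrets.get_secret_value(
            SecretId="prod/ssh/bastion-key")["SecretString"]
        # ...hand private_key_pem to your SSH library (e.g. paramiko) to connect.
        return {"keyLoaded": bool(private_key_pem)}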
Also, be aware of options available to avoid the need to SSH to EC2 instances:
SSM Run Command
EC2 Instance Connect
SSM Session Manager

Export data from an AWS EC2 Windows instance hosted in one account to S3 in another account

I am exploring the AWS S3 side of things and have a question. Is it possible to export data from an EC2 Windows instance hosted in one account to an S3 bucket hosted in another AWS account? I know one way is to use external tools like TntDrive, where I can map S3 as a mounted drive and export the data. I am looking for another good solution if S3 provides one, so if someone knows, please share your suggestions.
I think all you would need on your EC2 instance is access to AWS credentials for the other account; then you could copy your file from your EC2 instance to an S3 bucket in the second account. You may also be able to do it by granting the identity associated with the EC2 instance rights to an S3 bucket owned by the second account, in which case you could just write to that bucket "yourself".
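A hedged sketch of the first approach with boto3, assuming the second account has created a role that your instance's role is trusted to assume; the role ARN, bucket name, and file paths are placeholders:

    import boto3

    # Assume a role in the destination account (it must trust this instance's role).
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::222222222222:role/CrossAccountS3Writer",  # hypothetical
        RoleSessionName="ec2-export",
    )["Credentials"]

    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    s3.upload_file(r"C:\exports\report.csv", "other-account-bucket", "exports/report.csv")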

AWS S3 bucket accessible from my Elastic Beanstalk instance only

I have one S3 bucket and one Elastic Beanstalk instance. Currently my S3 bucket is public, so it is accessible from any domain, even from my localhost.
I want all of my S3 bucket's resources to be accessible only from the Elastic Beanstalk instance where my app is hosted/running. My app should be able to view these resources and upload new images/resources to this bucket.
I am sure somebody must have done this.
There are several ways to control access to S3. The best practice for making something privately accessible is to not grant any access to your S3 buckets/files in the bucket policy.
Instead, create an IAM role which has either full access to S3 or access limited to certain actions and certain buckets.
You can attach an IAM role to every EC2 instance, and also to every Elastic Beanstalk environment. The role's credentials are automatically served to your instances via instance metadata. This is a safe way to give special rights to your instances.
(Note: this is an AWS security best practice, since AWS handles the key rotation on your EC2 boxes.)
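As a small sketch of what the application code can look like once the role is attached (the bucket name is a placeholder; no credentials appear anywhere):

    import boto3

    # Runs on the Elastic Beanstalk / EC2 instance; the attached instance-profile
    # role supplies temporary credentials automatically via instance metadata.
    s3 = boto3.client("s3")
    BUCKET = "my-private-app-bucket"    # hypothetical bucket name

    s3.upload_file("/tmp/new-image.png", BUCKET, "images/new-image.png")
    body = s3.get_object(Bucket=BUCKET, Key="images/new-image.png")["Body"].read()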

How do we provide our AWS app with access to customers' resources without requiring their secret key?

I am in the early stages of writing an AWS app for our users that will run our research algorithms using their AWS resources. For example, our code will need to spin up EC2 instances running our 'worker' app, access RDS databases, and create and access SQS queues. The AWS Java SDK examples (we are writing this in Java) use an AwsCredentials.properties file to store the Access Key ID and Secret Access Key, which is fine for examples, but obviously not acceptable for our users, who would in essence be giving us access to all their resources. What is a clean way to go about running our system on their behalf? I discovered AWS Identity and Access Management (IAM), which seems to be for this purpose (I haven't got my head around it yet), esp. Cross-account access between AWS accounts. This post makes it sound straightforward:
Use the amazon IAM service to create a set of keys that only has permission to perform the tasks that you require for your script.
http://aws.amazon.com/iam/
However, other posts (e.g., Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?) suggest there are limitations to using IAM with EC2 in particular.
Any advice would be really helpful!
The key limitation with regard to RDS and EC2 is that, while you can restrict access to certain API actions, there are no resource-level constraints. For example, with an IAM S3 policy you can restrict a user to performing certain actions only on certain buckets. You can write a policy for EC2 that says a user is allowed to stop instances, but not one that says they can stop only certain instances.
Another option is for them to provide you with temporary credentials via the Security Token Service. A variant on that is to use the new IAM roles feature: an instance has a set of policies associated with it, and you don't need to provide an AwsCredentials.properties file because the SDK can fetch credentials from the metadata service.
Finally, one last option might be consolidated billing. If the reason you are using their AWS resources is just the billing, then set up a new account which is billed from their account. The accounts are isolated from each other, so you can't, for example, delete their instances by accident. Equally, you can't access their RDS snapshots and things like that (access to an RDS instance via MySQL, as opposed to the AWS API, would depend on the instance's security group). You can of course combine this with the previous options: they could provide you with credentials that only allow you to perform certain actions within that isolated account.