According to the Datomic documentation, I created a VPC and put my Elastic Beanstalk application in the same VPC as the Datomic system. However, when I connect to the database from my server on Elastic Beanstalk, I get the following error:
Forbidden to read keyfile at s3://humboi-march-2021-storagef7f305e7-1h3lt-s3datomic-1650q253gkqr1/humboi-march-2021/datomic/access/dbs/db/humboi-march-2021/read/.keys. Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.
How do I fix this?
If you are using static programmatic access credentials, make sure the user associated with those credentials has permission to interact with the bucket where the key file is, and permission to call s3:GetObject (check the policy on the user in the IAM console)
Make sure your bucket policy is not denying the identity tied to those credentials from calling s3:GetObject (check the bucket policy in the S3 console)
If the credentials are tied to the Beanstalk service role (ambient), ensure the role has permission to call s3:GetObject (check the Beanstalk service role in the IAM console)
If the ambient credentials are inferred from an EC2 instance that was created by Beanstalk, make sure the instance role has permission to call s3:GetObject (check the EC2 instance role in the IAM console); a policy sketch follows below
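As a rough illustration, here is a minimal CDK sketch (TypeScript) of the kind of grant the instance role needs. The bucket ARN is taken from the error message above; the role name `aws-elasticbeanstalk-ec2-role` is just the usual Beanstalk default and is an assumption, so substitute whatever role your environment actually uses:

```typescript
// Hypothetical sketch (AWS CDK, TypeScript): allow the Beanstalk instance role to
// read the Datomic keyfile. Bucket ARN comes from the error message; the role name
// is a placeholder for whatever instance role your environment uses.
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as iam from 'aws-cdk-lib/aws-iam';

export class KeyfileAccessStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The role already attached to the Beanstalk EC2 instances (placeholder name).
    const instanceRole = iam.Role.fromRoleName(this, 'BeanstalkInstanceRole',
      'aws-elasticbeanstalk-ec2-role');

    // Allow reading the keyfile (and anything else under the Datomic prefix).
    instanceRole.addToPrincipalPolicy(new iam.PolicyStatement({
      actions: ['s3:GetObject'],
      resources: ['arn:aws:s3:::humboi-march-2021-storagef7f305e7-1h3lt-s3datomic-1650q253gkqr1/*'],
    }));
  }
}
```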
Related
I have an app running in an auto-scaled EC2 environment in account1, created via AWS CDK (it also needs to support running in multiple regions). During execution the app needs to get an object from account2's S3.
One way to get the S3 data is to use temporary credentials (via an STS assume-role), as sketched after the steps below:
on the account1 side, create a policy that allows the EC2 instance role to assume a role and obtain temporary STS credentials for the S3 object
on the account2 side, create a policy granting GetObject access to the S3 object
on the account2 side, create a role, attach the policy from step 2 to it, and add a trust relationship to account1's EC2 role
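For illustration, a rough sketch of what the runtime side of this flow could look like on account1's instance, using the AWS SDK for JavaScript v3 in TypeScript; the role ARN, bucket, and key are placeholders for the resources created in the steps above:

```typescript
// Hypothetical sketch (AWS SDK for JavaScript v3, TypeScript): assume the account2
// role from step 3 using the instance role's ambient credentials, then read the
// object with the temporary credentials. ARN, bucket, and key are placeholders.
import { STSClient, AssumeRoleCommand } from '@aws-sdk/client-sts';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

async function readCrossAccountObject(): Promise<string | undefined> {
  const sts = new STSClient({});

  const { Credentials } = await sts.send(new AssumeRoleCommand({
    RoleArn: 'arn:aws:iam::222222222222:role/Account2S3ReadRole', // placeholder
    RoleSessionName: 'cross-account-s3-read',
  }));

  const s3 = new S3Client({
    credentials: {
      accessKeyId: Credentials!.AccessKeyId!,
      secretAccessKey: Credentials!.SecretAccessKey!,
      sessionToken: Credentials!.SessionToken,
    },
  });

  const res = await s3.send(new GetObjectCommand({
    Bucket: 'account2-bucket',   // placeholder
    Key: 'path/to/object.json',  // placeholder
  }));
  // transformToString is available on recent SDK v3 versions.
  return res.Body?.transformToString();
}
```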
Pros: no user credentials are required to get access to the data
Cons: after each environment update, the permissions have to be reconfigured manually
Another way is to create a user in account2 with permission to get the S3 object and put that user's credentials on the account1 side.
Pros: doesn't require manual permission configuration after each environment update
Cons: Exposes IAM user's credentials
Is there a better option to eliminate manual permission config and explicit IAM user credentials sharing?
You can add a Bucket Policy on the Amazon S3 bucket in Account 2 that permits access by the IAM Role used by the Amazon EC2 instance in Account 1.
That way, the EC2 instance(s) can access the bucket just as if it were in the same account, without having to assume any roles or users.
Simply set the Principal to be the ARN of the IAM Role used by the EC2 instances.
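A minimal sketch of that bucket policy, written with CDK in TypeScript and deployed in Account 2, might look like this; the role ARN is a placeholder for the Account 1 instance role:

```typescript
// Hypothetical sketch (AWS CDK, TypeScript, deployed in Account 2): add a bucket
// policy statement that lets Account 1's EC2 instance role read objects directly.
// The role ARN is a placeholder.
import * as iam from 'aws-cdk-lib/aws-iam';
import * as s3 from 'aws-cdk-lib/aws-s3';

declare const bucket: s3.Bucket; // the Account 2 bucket defined elsewhere in the stack

bucket.addToResourcePolicy(new iam.PolicyStatement({
  principals: [new iam.ArnPrincipal('arn:aws:iam::111111111111:role/account1-ec2-role')],
  actions: ['s3:GetObject'],
  resources: [bucket.arnForObjects('*')],
}));
```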
On my client's AWS account, security credentials are generated every time we log in to their AWS sandbox account. This credentials file is automatically generated and downloaded via a Chrome plugin (SAML to AWS STS Key Conversion).
We then have to place the generated content into the ~/.aws/credentials file inside an EC2 instance in the same AWS account. This is a little inconvenient, as we have to update the generated credentials and session_token in the credentials file inside the EC2 instance every time we launch a Terraform script.
Is there any way we can attach a role so that we can just use the EC2 instance without entering the credentials into the credentials file?
Please suggest.
Work out a reasonable, minimal set of permissions the Terraform script needs to create its AWS resources, create an IAM role with those permissions, then add that IAM role to the instance (or launch a new instance with the role). Don't keep a ~/.aws/credentials file on the instance, or it will take precedence over the IAM role-based credentials.
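For example, a rough CDK (TypeScript) sketch of launching an instance with such a role could look like the following; the managed policy and instance details are placeholders, not the actual minimal permission set your Terraform code needs:

```typescript
// Hypothetical sketch (AWS CDK, TypeScript): launch an instance with a role instead
// of relying on a ~/.aws/credentials file. The managed policy shown is only an
// example; scope it down to whatever the Terraform script actually needs.
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

declare const scope: Construct;
declare const vpc: ec2.IVpc;

const terraformRole = new iam.Role(scope, 'TerraformInstanceRole', {
  assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
});
// Example permission only; replace with the minimal set Terraform needs.
terraformRole.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonS3ReadOnlyAccess'));

new ec2.Instance(scope, 'TerraformRunner', {
  vpc,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
  machineImage: ec2.MachineImage.latestAmazonLinux2(),
  role: terraformRole, // credentials are delivered via the instance profile
});
```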
Everywhere I look, an IAM Role is created for an EC2 instance and given policies like S3FullAccess.
Is it possible to create an IAM Role for S3 instead of EC2? And attach that Role to an S3 bucket?
I created an IAM Role with S3FullAccess, but I am not able to attach it to an existing bucket or create a new bucket with that Role. Please help.
IAM (Identity and Access Management) Roles are a way of assigning permissions to applications, services, EC2 instances, etc.
Examples:
When a Role is assigned to an EC2 instance, credentials are passed to software running on the instance so that they can call AWS services.
When a Role is assigned to an Amazon Redshift cluster, it can use the permissions within the Role to access data stored in Amazon S3 buckets.
When a Role is assigned to an AWS Lambda function, it gives the function permission to call other AWS services such as S3, DynamoDB or Kinesis.
In all these cases, something is using the credentials to call AWS APIs.
Amazon S3 never requires credentials to call an AWS API. While it can call other services for Event Notifications, the permissions are actually put on the receiving service rather than S3 as the requesting service.
Thus, there is never any need to attach a Role to an Amazon S3 bucket.
Roles do not apply to S3 the way they do to EC2.
Assuming @Sunil is asking whether we can restrict access to data in S3:
In that case, we can either set S3 ACLs on the buckets or the objects in them, or set S3 bucket policies.
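As an illustration of the role-vs-bucket distinction, here is a small CDK (TypeScript) sketch: the bucket itself gets no role; instead, read access is granted to a role, here the hypothetical `my-app-instance-role`:

```typescript
// Hypothetical sketch (AWS CDK, TypeScript): restrict the bucket and grant read
// access to a specific role rather than attaching a role to the bucket.
// The role name is a placeholder.
import * as iam from 'aws-cdk-lib/aws-iam';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

declare const scope: Construct;

const bucket = new s3.Bucket(scope, 'DataBucket', {
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL, // nothing is public by default
});

// Grant read to a specific role (e.g. an EC2 instance role), not to the bucket.
const readerRole = iam.Role.fromRoleName(scope, 'ReaderRole', 'my-app-instance-role');
bucket.grantRead(readerRole); // grantRead wires up the s3 read permissions for this principal
```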
I have set up the CodeDeploy agent, but when I run it I get the error:
Error: HEALTH_CONSTRAINTS
Digging further, this is the entry in the CodeDeploy log on the EC2 instance:
InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Cannot reach InstanceService: Aws::S3::Errors::AccessDenied - Access Denied
I have done a simple wget against the bucket, and it results in:
Connecting to s3-us-west-2.amazonaws.com (s3-us-west-2.amazonaws.com)|xxxxxxxxx|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
By contrast, if I use the AWS CLI I can reach the S3 bucket correctly.
The EC2 instance is in a VPC, it has a role associated with full S3 permissions, and the inbound and outbound firewall settings seem correct. So it is apparently something related to permissions when accessing over HTTPS.
The questions:
Under which credentials does the CodeDeploy agent run?
What permissions or roles have to be set on the S3 bucket?
The EC2 instance's credentials (the instance role) will be used when pulling from S3.
To be clear, the Service Role that CodeDeploy needs does not require S3 permissions. That service role allows CodeDeploy to call the Auto Scaling and EC2 APIs to describe the instances, so CodeDeploy knows how to deploy to them.
That being said, for your AccessDenied issue with S3, there are two things you need to check (a policy sketch follows the documentation link below):
The role on the EC2 instance(s) has s3:Get* and s3:List* (or more specific) permissions.
The S3 bucket you want to deploy from has a policy attached that allows the EC2 instance role to get the object.
Documentation for permissions: http://docs.aws.amazon.com/codedeploy/latest/userguide/instances-ec2-configure.html#instances-ec2-configure-2-verify-instance-profile-permissions
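For instance, a rough CDK (TypeScript) sketch of the first check could look like this; the bucket name is a placeholder for your deployment bucket:

```typescript
// Hypothetical sketch (AWS CDK, TypeScript): give the EC2 instance role the
// s3:Get*/s3:List* permissions the CodeDeploy agent needs on the deployment bucket.
// The bucket name is a placeholder.
import * as iam from 'aws-cdk-lib/aws-iam';

declare const instanceRole: iam.IRole; // the role attached to the deployment instances

instanceRole.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: ['s3:Get*', 's3:List*'],
  resources: [
    'arn:aws:s3:::my-codedeploy-bucket',    // the bucket itself (for ListBucket)
    'arn:aws:s3:::my-codedeploy-bucket/*',  // the revision objects
  ],
}));
```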
CodeDeploy uses "Service Roles" to access AWS resources. In the AWS console for CodeDeploy, look for "Service role" and assign the IAM role that you created for CodeDeploy in your application settings.
If you have not created an IAM role for CodeDeploy, do so and then assign it to your CodeDeploy application.
I've been reading up on configuring the CloudWatch Logs service, but the docs say that you must attach a permission to the IAM role of your instance. If I already have an instance running that doesn't have an IAM role attached, what options do I have for configuring this service?
You can clone your current instance into a new EC2 instance that has an IAM instance profile (role) assigned (a scripted sketch of the steps follows below).
Stop your EC2 instance.
Create an AMI image of your EC2 instance.
Launch a new EC2 instance from your AMI image, this time assigning an IAM role.
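If you prefer to script steps 2 and 3, a rough sketch with the AWS SDK for JavaScript v3 (TypeScript) might look like the following; the instance ID, AMI name, and instance-profile name are placeholders:

```typescript
// Hypothetical sketch (AWS SDK for JavaScript v3, TypeScript): image the stopped
// instance and launch a copy with an instance profile attached.
// IDs and the profile name are placeholders.
import {
  EC2Client,
  CreateImageCommand,
  RunInstancesCommand,
} from '@aws-sdk/client-ec2';

async function cloneWithRole(): Promise<void> {
  const ec2 = new EC2Client({});

  // Step 2: create an AMI from the (stopped) instance.
  const image = await ec2.send(new CreateImageCommand({
    InstanceId: 'i-0123456789abcdef0', // placeholder
    Name: 'cloudwatch-logs-clone',
  }));

  // Step 3: launch a new instance from that AMI with an IAM instance profile.
  // (In practice, wait for the image to become available before launching.)
  await ec2.send(new RunInstancesCommand({
    ImageId: image.ImageId,
    InstanceType: 't3.micro',
    MinCount: 1,
    MaxCount: 1,
    IamInstanceProfile: { Name: 'cloudwatch-logs-instance-profile' }, // placeholder
  }));
}
```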
If the instance was launched without an IAM role and you don't want to relaunch it, then:
Create a policy (not an inline policy) as specified in the documentation
Add a test IAM user and attach the policy to the test_user
From the IAM dashboard, download or copy the test_user security credentials (key and secret)
On your instance, run aws configure and set up the credentials using the key and secret
It may look complicated but it is not.