Access AWS resources from awscli with new IAM admin user - amazon-web-services

I'm trying to access AWS resources from the aws cli after configuring a new IAM user with admin rights, and the CLI acts as if no resources are available.
What I did:
Created an RDS instance (while being logged in with the root user);
Created a new IAM user and added it to a group that has the AdministratorAccess[*] policy attached;
Configured the aws cli to use the named user's access keys and the same region as the RDS instance;
Ran the command aws rds describe-db-instances. The result is:
$ aws rds describe-db-instances
{
    "DBInstances": []
}
I would have expected to see my RDS instance listed. Am I missing something?
[*] The policy json contains this:
"Effect": "Allow",
"Action": "*",
"Resource": "*"

I do not think IAM privileges are an issue here, since there is no error. Is the region in the default profile of your AWS credentials configured correctly?
Or try specifying the region explicitly?
aws rds describe-db-instances --region eu-west-2
If it doesn't work, then the CLI is getting the credentials from somewhere else.
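The CLI resolves the region from several places in a documented order: the --region flag first, then the AWS_REGION / AWS_DEFAULT_REGION environment variables, then the profile's region setting. A small sketch of that precedence (simplified; the values used are hypothetical):

```python
# Simplified sketch of the AWS CLI's region lookup order:
# --region flag > AWS_REGION / AWS_DEFAULT_REGION env vars > profile config.
def resolve_region(flag_region=None, env=None, profile_config=None):
    env = env or {}
    profile_config = profile_config or {}
    if flag_region:
        return flag_region
    for var in ("AWS_REGION", "AWS_DEFAULT_REGION"):
        if env.get(var):
            return env[var]
    return profile_config.get("region")

# The RDS instance lives in eu-west-2, but an env var points elsewhere,
# so the CLI queries the wrong region and sees an empty list:
print(resolve_region(env={"AWS_DEFAULT_REGION": "us-east-1"},
                     profile_config={"region": "eu-west-2"}))
# An explicit --region flag overrides both:
print(resolve_region(flag_region="eu-west-2",
                     env={"AWS_DEFAULT_REGION": "us-east-1"}))
```

This is why an explicit --region on the command line is a quick way to rule out a region mismatch before suspecting permissions.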

Related

Is it possible to pull an image from ECR on an EC2 instance without using docker login?

I have a private ECR repo and an EC2 instance. If I want to pull an image from the private ECR repo on my local machine, I have to set up my AWS credentials using aws configure and perform a docker login.
Now I want to pull an image from the EC2 instance. When I try to run the docker command directly, it tells me to authenticate first. Is it possible to attach an IAM role to my EC2 instance and skip the docker login or aws ecr login workflow?
At the moment I can only run aws configure inside the EC2 instance, which seems to need an extra IAM user, something I am trying to avoid.
You don't have to run aws configure on the EC2 machine; in fact, that would be bad security practice. Instead, attach an IAM role that allows the EC2 instance to fetch images and, more importantly, to obtain an authorization token for the ECR registry. For example, you can create a policy with the following permissions for read-only access to ECR images:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:DescribeImages",
                "ecr:GetAuthorizationToken",
                "ecr:ListImages"
            ],
            "Resource": "*"
        }
    ]
}
Using this policy, create a new IAM service role and attach it to the EC2 instance.
Note that even with this role attached, you will still have to authenticate the Docker CLI using an authorization token.
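The registry hostname Docker logs into follows a fixed pattern built from the account ID and region. A sketch that assembles it and the usual token-based login command (the account ID and region below are hypothetical placeholders):

```python
# Build the private ECR registry hostname and the login command the
# instance would run. Substitute your own account ID and region.
account_id = "123456789012"   # hypothetical
region = "us-east-1"          # hypothetical
registry = f"{account_id}.dkr.ecr.{region}.amazonaws.com"
print(registry)

# With the role attached, run this in a shell on the instance to
# authenticate Docker with a temporary authorization token:
login_cmd = (
    f"aws ecr get-login-password --region {region} "
    f"| docker login --username AWS --password-stdin {registry}"
)
print(login_cmd)
```

The token is issued against the instance role's credentials, so no IAM user or aws configure is needed on the instance.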
In addition to the other answers posted here stating you should use the EC2 instance's IAM role instead of configuring credentials with aws configure, I also suggest installing the Amazon ECR Credential Helper on your EC2 instance. Then you won't have to perform a docker login at all.
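With the credential helper installed, ~/.docker/config.json can delegate authentication for the registry to it, for example (the registry hostname here is a hypothetical placeholder):

```json
{
  "credHelpers": {
    "123456789012.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
  }
}
```

Docker then fetches a fresh ECR token through the instance role on every pull, so tokens never expire mid-session.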
I have to setup my AWS credential by using aws configure and perform a docker login.
You don't have to. If your code runs on EC2, you should use an instance IAM role instead of the regular setup of AWS credentials via aws configure.

Cross account access for AWS accounts using Direct Connect

I have been working with AWS for a number of years, but I am not very strong with some of the advanced networking concepts.
So, we have multiple AWS accounts. None of them have public internet access, but we use Direct Connect for on-prem to AWS connection.
I have a S3 bucket in Account A.
I created an IAM user in Account A along with an access key/secret key pair and granted this IAM user s3:PutObject permission on the S3 bucket.
I write a simple Python script to list the objects in this bucket from on-prem, it works, as expected.
I then execute the same Python script on an EC2 instance running in Account B and get "botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied".
Do I need to create VPC endpoint for S3 in Account B? Does cross account IAM role come into play here?
Your situation is:
You have an Amazon S3 Bucket-A in Account-A
You have an IAM User (User-A) in Account-A
You have an Amazon EC2 instance running in Account-B
From that EC2 instance, you wish to access Bucket-A
It also appears that you have a means for the EC2 instance to access Amazon S3 endpoints to make API calls
Assuming that the instance is able to reach Amazon S3 (which appears to be true because the error message refers to permissions, which would have come from S3), there are two ways to authenticate for access to Bucket-A:
Option 1: Using the IAM User from Account-A
When making the call from the EC2 instance to Bucket-A, use the IAM credentials created in Account-A. It doesn't matter that the request is coming from an Amazon EC2 instance in Account-B. In fact, Amazon S3 doesn't even know that. An API call can come from anywhere on the Internet (including your home computer or mobile phone). What matters is the set of credentials provided when making the call.
If you are using the AWS Command-Line Interface (CLI) to make the call, then you can save the User-A credentials as a profile by using aws configure --profile user_a (or any name), then entering the credentials from the IAM User in Account-A. Then, access Amazon S3 with aws s3 ls --profile user_a. Using a profile like this allows you to switch between credentials.
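The resulting ~/.aws/credentials file would then contain the named profile alongside the default one (all key values shown are placeholders):

```ini
[default]
aws_access_key_id     = AKIA...ACCOUNT-B-KEY
aws_secret_access_key = ...

[user_a]
aws_access_key_id     = AKIA...ACCOUNT-A-KEY
aws_secret_access_key = ...
```

Commands without --profile keep using [default]; adding --profile user_a switches to the Account-A credentials.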
Option 2: Using a Bucket Policy
Amazon S3 also has the ability to specify a Bucket Policy on a bucket, which can grant access to the bucket. So, if the EC2 instance is using credentials from Account-B, you can add a Bucket Policy that grants access from those Account-B credentials.
Let's say that the Amazon EC2 instance was launched with an IAM Role called role-b, then you could use a Bucket Policy like this:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<Account-B>:role/role-b"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-a/*"
        }
    ]
}
Disclaimer: All of the above assumes that you don't have any weird policies on your VPC Endpoints / Amazon S3 Access Points or however the VPCs are connecting with the Amazon S3 endpoints.

Access denied when trying to do AWS s3 ls using AWS cli

I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed the awscli on it and configured my user's access key. My user has admin access and the full S3 access policy too. I can see the buckets in the AWS console, but when I run aws s3 ls on the instance it returns An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.
What else do I need to do to grant permissions to the role or my user so that I can list and sync objects between S3 and the instance?
I ran into this issue as well.
I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out that if you have AWS configuration set in your bash_profile or bashrc, the awscli will default to using those values instead.
I changed the environment variables in bash_profile and bashrc to the proper keys and everything started working.
Turns out I forgot I had to do mfa to get access token to be able to operate in S3. Thank you for everyone response.
There appears to be confusion about when to use IAM Users and IAM Roles.
When using an Amazon EC2 instance, the best method to grant permissions is:
Create an IAM Role and attach policies to grant the desired permissions
Associate the IAM Role with the Amazon EC2 instance. This can be done at launch time, or afterwards (Actions/Instance Settings/Attach IAM Role).
Any application running on the EC2 instance (including the AWS CLI) will now automatically receive credentials. Do not run aws configure.
If you want to grant permissions to your own (non-EC2) computer, then:
Create an IAM User (or use your existing one) and attach policies to grant the desired permissions
On the computer, run aws configure and enter the Access Key and Secret Key associated with the IAM User. This will store the credentials in ~/.aws/credentials.
Any application running on this computer will then use credentials from the local credentials file
Create an IAM user with this permission policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucketName/*"
        }
    ]
}
Save the Access key ID & Secret access key.
sudo apt install awscli
aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxZI4
AWS Secret Access Key [None]: 8Bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Default region name [None]: region (ex. us-east-2)
Default output format [None]: json
aws s3 ls s3://s3testingankit1/
This problem can occur not only from the CLI but also when calling the S3 API, for example.
The cause of this error can be a wrong configuration of the access permissions on the bucket.
For example, with the setup below you're granting full privileges to perform actions on the bucket's internal objects, BUT not specifying any action on the bucket itself:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
This will lead to the mentioned
... (AccessDenied) when calling the ListBuckets ...
error.
In order to fix this you should allow the application to access the bucket itself (1st statement) and to edit all objects inside the bucket (2nd statement):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
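The rule of thumb is that bucket-level actions (such as s3:ListBucket) must target the bare bucket ARN, while object-level actions target the arn:aws:s3:::bucket/* form. A small sketch of a sanity check along those lines (the policy contents mirror the example above; the bucket name is a placeholder):

```python
import json

# Policy mirroring the corrected example above (bucket name is hypothetical).
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": ["s3:ListBucket"],
     "Resource": ["arn:aws:s3:::my-bucket"]},
    {"Effect": "Allow", "Action": ["s3:PutObject"],
     "Resource": ["arn:aws:s3:::my-bucket/*"]}
  ]
}
""")

def list_bucket_on_bucket_arn(policy):
    """True if every statement granting s3:ListBucket names a bare bucket
    ARN (no trailing /*), which is what bucket listing requires."""
    for stmt in policy["Statement"]:
        if "s3:ListBucket" in stmt.get("Action", []):
            if any(r.endswith("/*") for r in stmt.get("Resource", [])):
                return False
    return True

print(list_bucket_on_bucket_arn(policy))
```

Running a check like this against a policy before attaching it catches the /*-only mistake that produces the AccessDenied on ListBuckets.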
There are shorter configurations that might solve the problem, but the one above also tries to keep the permissions fine-grained.
I ran into this yesterday running a script I ran successfully in September 2021.
TL;DR: add --profile your.profile.name to the end of the command
I have multiple profiles on the login I was using. I think something in the aws environment changed, or perhaps I had done something that was able to bypass this before. Back in September I set the profile with
aws configure set region us-west-2 --profile my.profile.name
But yesterday, after the failure, I saw that aws sts get-caller-identity was returning a different identity. After some documentation search I found the additional method for specifying the profile, and operations like:
aws s3 cp myfile s3://my-s3-bucket --profile my.profile.name
all worked
I have a Windows machine with CyberDuck from which I was able to access a destination bucket, but when trying to access the bucket from a Linux machine with the aws command, I got "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied".
I then executed the same command aws s3 ls from a command-line interface on the Windows machine and it worked just fine. It looks like there is some security restriction on the AWS side for that machine/IP.

IAM role to access services of another AWS account

For security reasons, we have a dev, QA, and a prod AWS account. We are using IAM roles for instances. This is working correctly per account basis.
Now the requirement is that we want to access multiple AWS services (such as S3, SQS, SNS, EC2, etc.) on an EC2 instance in the QA account from the Dev AWS account.
We have created an STS policy and a role with the other AWS account as a trusted entity, but somehow we are not able to attach this role to the EC2 instance.
Example STS policy:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::546161XXXXXX:role/AdminAccessToAnotherAccount"
    }
}
AdminAccessToAnotherAccount: this is the policy in the other account that grants admin access.
This role is not listed when attaching a role to the EC2 instance.
It appears that your situation is:
You have an EC2 instance in Account-1
An IAM Role ("Role-1") is assigned to the EC2 instance
You want to access resources in Account-2 from the EC2 instance
The following steps can enable this:
Create an IAM Role in Account-2 ("Role-2") with the permissions you want the instance to receive
Add a Trust policy to Role-2, trusting Role-1
Confirm that Role-1 has permission to call AssumeRole on Role-2
From the EC2 instance using Role-1, call AssumeRole on Role-2
It will return a set of credentials (Access Key, Secret Key, Token)
Use those credentials to access services in Account-2 (via aws configure --profile foo or an API call).
If you use aws configure, you will also need to manually edit the ~/.aws/credentials file to add the aws_session_token to the profile, since the CLI command does not prompt for it.
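After that edit, the profile in ~/.aws/credentials would look something like this (profile name and all values are placeholders; temporary keys from AssumeRole typically start with ASIA):

```ini
[foo]
aws_access_key_id     = ASIA...TEMPORARY-KEY
aws_secret_access_key = ...
aws_session_token     = ...long-token-from-assume-role...
```

Without the aws_session_token line, calls made with the temporary key pair are rejected as invalid credentials.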
Examples:
Using Temporary Security Credentials to Request Access to AWS Resources
Cross-Account Access Control with Amazon STS for DynamoDB

AWS SDK/CLI access error with EC2 Instance credentials for aws redshift create-cluster

I have an IAM role associated with my EC2 instances with the following policy regarding Redshift:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "redshift:*"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
However, from my EC2 instance with either the AWS SDK or CLI, I am unable to create a cluster.
Both
InstanceProfileCredentialsProvider creds = new InstanceProfileCredentialsProvider(false);
AmazonRedshift redshift = AmazonRedshiftClientBuilder.standard().withCredentials(creds).build();
CreateClusterRequest request = new CreateClusterRequest();
request.setAllowVersionUpgrade(true);
request.setClusterType("multi-node");
request.setClusterIdentifier("identifier");
request.setDBName("dbname");
request.setIamRoles(Collections.singleton("iam_role_arn"));
request.setPubliclyAccessible(true);
request.setNodeType("dc1.8xlarge");
request.setNumberOfNodes(2);
request.setPort(8192);
request.setMasterUsername("username");
request.setMasterUserPassword("Password1");
Cluster cluster = redshift.createCluster(request);
and
aws redshift create-cluster --cluster-identifier identifier --master-username username --master-user-password Password1 --node-type dc1.8xlarge --region us-west-2 --number-of-nodes 2
result in:
An error occurred (UnauthorizedOperation) when calling the CreateCluster operation: Access Denied. Please ensure that your IAM Permissions allow this operation.
Using the IAM policy simulation tool I was able to confirm that my instance role has the permissions to create a Redshift cluster.
Any help understanding this would be appreciated.
If you are able to call other Redshift operations and it is only failing on createCluster(), then the error is probably being caused by the IAM Role that is being passed to Redshift.
You need to Grant a User Permissions to Pass a Role to an AWS Service.
The logic behind it is this:
Let's say you do not have access to S3 bucket X
Let's say there is a role in your account that does have access to bucket X
You could launch a Redshift cluster with the role and then use it to access bucket X
To prevent people from escalating privileges this way, users need the iam:PassRole permission, which states whether they are allowed to pass a role to Redshift, and even which roles they are allowed to pass.
The same applies to passing roles to an Amazon EC2 instance.
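A minimal statement granting permission to pass one specific role to Redshift might look like this (the account ID and role name are hypothetical placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/my-redshift-s3-role",
      "Condition": {
        "StringEquals": { "iam:PassedToService": "redshift.amazonaws.com" }
      }
    }
  ]
}
```

Attach a statement like this to the instance role alongside redshift:*, and createCluster calls that pass the named role should stop failing with UnauthorizedOperation.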