How to set the user for the AWS CLI - amazon-web-services

After installing awscli (the AWS command-line tool), when I try to run it, I get this message in the terminal:
$ aws dynamodb describe-table --table-name MyTable
An error occurred (AccessDeniedException) when calling the DescribeTable operation:
User: arn:aws:iam::213352837455:user/someuser is not authorized to
perform: dynamodb:DescribeTable on resource: arn:aws:dynamodb:ap-northeast-1:213352837455:table/MyTable
$
But I don't know why I am considered logged in as someuser at this moment (in the terminal in particular, but even in AWS).
someuser is just one of a few users I set up on AWS a while ago.
What is the right way to log in as the correct user so I can use awscli?

If you are running the AWS Command-Line Interface (CLI) on an Amazon EC2 instance that has been assigned a role, then the CLI can use the permissions associated with that role.
If you are not running on an EC2 instance, then you can provide credentials via a credentials file (~/.aws/credentials) or an environment variable.
The easiest way to configure the credentials is:
$ aws configure
See: Configuring the AWS CLI
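For reference, a minimal ~/.aws/credentials file looks like this (the values shown are AWS's documented example keys, not real credentials):
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY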

Maybe your old credentials are still stored in ~/.aws.
Log in with the correct credentials:
aws configure
For more info see Configuring the AWS CLI in the official documentation.
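To confirm which identity the CLI is actually using, you can run aws sts get-caller-identity (a standard STS call; the output below is illustrative):
$ aws sts get-caller-identity
{
    "UserId": "AIDAEXAMPLEID",
    "Account": "213352837455",
    "Arn": "arn:aws:iam::213352837455:user/someuser"
}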

Related

initiate aws ssm from jenkins in one account to ec2 in another instance for data transfer

Background:
I have Jenkins installed in AWS Account #1 (account1234) and it has the IAM role Role-Jenkins attached to it. GitHub is configured with Jenkins.
When I click build job in Jenkins, jenkins pulls all the files from github and can be found in
/var/lib/jenkins/workspace/.
There's an application running in AWS Account #2 (account5678) in an EC2 instance (i-xyz123), and the project files are in /home/app/all_files/. This EC2 instance has the role app-role attached to it.
What I'm trying to achieve:
When I click build, I want Jenkins to push files from account1234 to account5678 by opening an SSM session from Jenkins to the EC2 instance on which the app is running.
What I tried:
In Jenkins, as part of the build shell script, I added:
aws ssm send-command --region us-east-1 --instance-ids i-xyz123 --document-name AWS-RunShellScript --comment "IP config" --parameters commands=ifconfig --output text
to test it. (If successful, I want to pass cp /var/lib/jenkins/workspace/ /home/app/all_files/ as the command.)
Error:
An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:sts::account1234:assumed-role/Role-Jenkins/i-01234abcd is not authorized to perform: ssm:SendCommand on resource: arn:aws:ec2:us-east-1:account1234:instance/i-xyz123
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Issue 1: instance/i-xyz123 is in account5678, but the error above shows SSM trying to connect to an instance in account1234 (which shouldn't be happening).
Q1: How do I update my command so that it tries to open an SSM session with instance/i-xyz123 present in account5678 to accomplish what I'm trying to do?
I believe I would also need to make each role added as a trusted relationship to the other.
(Note: I want to do it via Session Manager, as I won't have to deal with credentials of any sort.)
If I've understood correctly then you're right; to interact with the resources in account5678, there needs to be a trust relationship so that the Jenkins account can assume the relevant role in account5678 and call SSM from there.
Once you've configured the role relationship (ref: IAM cross-account roles), you should be able to achieve what you need by assuming the role first in your shell script and then running the SSM command. That way Jenkins will use the temporary credentials and execute the command in the correct account (5678).
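As a rough sketch of what that shell step could look like (the role name SsmCrossAccount is a placeholder, and this assumes jq is available on the agent):
# Assume the role in account5678 and export the temporary credentials
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::account5678:role/SsmCrossAccount \
  --role-session-name jenkins-ssm \
  --output json)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Credentials.SessionToken)
# The SSM call now runs in account5678's context
aws ssm send-command --region us-east-1 --instance-ids i-xyz123 \
  --document-name AWS-RunShellScript --comment "IP config" \
  --parameters commands=ifconfig --output text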
This site steps through it pretty well:
Tom Gregory - Jenkins Assume Role
If you Cmd/Ctrl+F on that page and search for 'shell', you should get to the section you need. Hope this somewhat helps.

How to run an AWS CLI: Elastic Beanstalk Wait command in Azure DevOps

The structure of the wait command is:
$ aws <command> wait <subcommand> [options and parameters]
However in DevOps it only seems to support:
$ aws <command> <subcommand> [options and parameters]
See example below where there is a Command and Subcommand. Where does the Wait go? I'm trying to run this command https://awscli.amazonaws.com/v2/documentation/api/latest/reference/elasticbeanstalk/wait/environment-updated.html
I had to set the Subcommand to wait and move environment-updated down into the Options and parameters.
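For the command in question, the task fields map like this (the environment name my-env is a placeholder):
Command: elasticbeanstalk
Subcommand: wait
Options and parameters: environment-updated --environment-names my-env
This corresponds to the CLI call aws elasticbeanstalk wait environment-updated --environment-names my-env.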
It looks like you won't be able to do this using the extension. However, you have the AWS CLI installed on the agent, so what you need is to set up a few variables and then call your commands from a PowerShell step.
Supply standard AWS environment variables in the build agent process
You can specify credentials with standard named AWS environment variables. These variables can be used to get credentials from a custom credentials store.
The following are all the supported standard named AWS environment variables:
AWS_ACCESS_KEY_ID – IAM access key ID.
AWS_SECRET_ACCESS_KEY – IAM secret access key.
AWS_SESSION_TOKEN – IAM session token.
AWS_ROLE_ARN – Amazon Resource Name (ARN) of the role you want to assume.
AWS_REGION – AWS Region code, for example, us-east-2.
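A minimal sketch of such a step, shown in shell syntax (in a PowerShell step you would set $env:AWS_ACCESS_KEY_ID and so on instead; all values below are placeholders):
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_REGION=us-east-2
aws elasticbeanstalk wait environment-updated --environment-names my-env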
You can also create a feature request on GitHub to support the wait command in the extension.

AWS - ECS load S3 files in entrypoint script

Hi all!
Code: (entrypoint.sh)
printenv
CREDENTIALS=$(curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
ACCESS_KEY_ID=$(echo "$CREDENTIALS" | jq .AccessKeyId)
SECRET_ACCESS_KEY=$(echo "$CREDENTIALS" | jq .SecretAccessKey)
TOKEN=$(echo "$CREDENTIALS" | jq .Token)
export AWS_ACCESS_KEY_ID=$ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$SECRET_ACCESS_KEY
export AWS_SESSION_TOKEN=$TOKEN
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Problem:
I'm trying to fetch AWS S3 files to ECS inspired by:
AWS Documentation
(But I'm fetching from S3 directly, not through a VPC endpoint.)
I have configured bucket policy & role policy (that is passed in taskDefinition as taskRoleArn & executionRoleArn)
Locally, when I fetch with the AWS CLI and pass the temporary credentials (which I logged in ECS with the printenv command in the entrypoint script), everything works fine. I can save files on my PC.
On ECS I have error:
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Where can I find a solution? Has someone had a similar problem?
First thing: if you are working inside AWS, it is strongly recommended to use an AWS ECS service role, ECS task role, or EC2 role; you do not need to fetch credentials from the metadata endpoint yourself.
That said, it seems like the current role does not have permission to S3, or the entrypoint is not exporting the environment variables properly.
If your container already has a role assigned, then you do not need to export the access key; just call aws s3 cp s3://BUCKET/file.txt /PATH/file.txt and it should work.
IAM Roles for Tasks
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances. Instead of creating and distributing your AWS credentials to the containers or using the EC2 instance's role, you can associate an IAM role with an ECS task definition or RunTask API operation.
So when you assign a role to the ECS task or ECS service, your entrypoint will be that simple:
printenv
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Also, your export will not work as you are expecting; the best way to pass environment variables to the container is from the task definition, not via export in the entrypoint.
I suggest assigning the role to the ECS task, and it should work as you are expecting.
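A rough sketch of registering a task definition with a task role via the CLI (account ID, role names, and image are placeholders):
aws ecs register-task-definition \
  --family my-app \
  --task-role-arn arn:aws:iam::123456789012:role/app-task-role \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --container-definitions '[{"name":"app","image":"my-image","essential":true,"memory":512}]'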

User Data script to call aws cli

I am trying to get some files from S3 on startup in an EC2 instance by using a User Data script and the command
/usr/bin/aws s3 cp ...
The log tells me that permission was denied, and I believe it is because the AWS CLI does not find any credentials when executing the user data script.
Running the command with sudo after the instance has started works fine.
I have run aws configure both with sudo and without.
I do not want to use a cron job to run something on startup, since I am working with an AMI and often need to change the script; it is therefore more convenient for me to change the user data instead of creating a new AMI every time the script changes.
If possible, I would also like to avoid writing the credentials into the script.
How can I configure awscli in such a way that the credentials are used when running a user data script?
I suggest you remove the AWS credentials from the instance/AMI. Your user data script will be supplied with temporary credentials, when needed, by the AWS metadata server.
See: IAM Roles for Amazon EC2
Clear/delete AWS credentials configurations from your instance and create an AMI
Create a policy that has the minimum privileges to run your script
Create a IAM role and attach the policy you just created
Attach the IAM role when you launch the instance (very important)
Have your userdata script call /usr/bin/aws s3 cp ... without supplying credentials explicitly or using credentials file
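Under those assumptions, a minimal user data sketch looks like this (bucket and paths are placeholders):
#!/bin/bash
# No credentials needed here: the CLI picks up temporary credentials
# from the attached instance role automatically
/usr/bin/aws s3 cp s3://my-bucket/myfile.txt /opt/myfile.txt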
You can configure your EC2 instance to receive a pre-defined IAM Role whose credentials are "baked in" to the instance and fetched upon instantiation, which in turn will allow it to call aws-cli commands in your User Data script without the need to configure credentials at all.
Here's more info on IAM Roles for EC2:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
It's worth noting here that you'll need to attach the appropriate policies to the IAM Role that you assign to your instance in order for the aws-cli commands to succeed. More information on that can be found here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#working-with-iam-roles
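For example, a minimal inline policy granting read access could be attached like this (role name, policy name, and bucket are placeholders):
aws iam put-role-policy \
  --role-name MyInstanceRole \
  --policy-name s3-read-only \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"s3:GetObject","Resource":"arn:aws:s3:::my-bucket/*"}]}'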

Running AWS CLI commands as ec2-user

I'm trying to use the AWS CLI for the first time, and I am doing it through PuTTY by SSHing to the EC2 instance.
I want to run a command like "aws ec2 authorize-security-group-ingress [options]"
But I get the following error: "A client error (UnauthorizedOperation) occurred when calling the AuthorizeSecurityGroupIngress operation: You are not authorized to perform this operation."
I believe that this is related to IAM user credentials. I have found out where to create IAM users, but I still don't understand how this helps me to execute this command when I'm logged into the server as ec2-user or root, or run the command through cron.
I have done a fair amount of reading regarding the access controls on AWS in their documentation, but I seem to be missing something.
How can I allow the command to be executed from within the AWS instance?
The missing information I was looking for is the command: aws configure
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
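Note that aws configure writes to the current user's ~/.aws directory, so if a cron job runs as ec2-user, configure the credentials as that same user, for example:
$ sudo -u ec2-user aws configure
Alternatively, attaching an IAM role to the instance (as in the user data question above) avoids storing credentials on the instance entirely.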