There are many AMIs on AWS created by a periodic backup job. I have root credentials and need to delete the old AMIs to clean up space.
But whenever I run the deregister command I get this error:
An error occurred (AuthFailure) when calling the DeregisterImage operation: Not authorized
The OwnerId of those AMIs is different from the root user's account ID.
Is there some way I can delete those AMIs using the AWS CLI?
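For reference, this is the kind of command I'm running, plus how I checked the owner (the image ID here is a placeholder):
aws ec2 deregister-image --image-id ami-0abc1234567890def
aws ec2 describe-images --image-ids ami-0abc1234567890def --query 'Images[].OwnerId' --output text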
Background:
I have Jenkins installed in AWS Account #1 (account1234), and the instance has the IAM role Role-Jenkins attached to it. GitHub is configured with Jenkins.
When I click build in Jenkins, Jenkins pulls all the files from GitHub; they can be found in
/var/lib/jenkins/workspace/.
There's an application running in AWS Account #2 (account5678) on an EC2 instance (i-xyz123), and the project files are in /home/app/all_files/. This EC2 instance has the role app-role attached to it.
What I'm trying to achieve:
When I click build, I want Jenkins to push files from account1234 to account5678 by opening an SSM session from Jenkins to the EC2 instance on which the app is running.
What I tried:
In Jenkins, as part of the build shell script, I added:
aws ssm send-command --region us-east-1 --instance-ids i-xyz123 --document-name AWS-RunShellScript --comment "IP config" --parameters commands=ifconfig --output text
to test it. (If successful, I want to pass cp /var/lib/jenkins/workspace/ /home/app/all_files/ as the command.)
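I expect that would look something like this, using the JSON form of --parameters since the command contains spaces (the exact copy command is my assumption):
aws ssm send-command --region us-east-1 --instance-ids i-xyz123 --document-name AWS-RunShellScript --parameters '{"commands":["cp -r /var/lib/jenkins/workspace/. /home/app/all_files/"]}' --output text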
Error:
An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:sts::account1234:assumed-role/Role-Jenkins/i-01234abcd is not authorized to perform: ssm:SendCommand on resource: arn:aws:ec2:us-east-1:account1234:instance/i-xyz123
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Issue 1: instance i-xyz123 is in account5678, but the error above shows SSM trying to connect to an instance in account1234 (which shouldn't be happening).
Q1: How do I update my command so that it opens an SSM session with instance i-xyz123 in account5678 to accomplish what I'm trying to do?
I believe I would also need to add each role as a trusted entity in the other.
(Note: I want to do it via Session Manager, as I won't have to deal with credentials of any sort.)
If I've understood correctly, then you're right: to interact with the resources in account5678, there needs to be a trust relationship so that the Jenkins account can assume the relevant role in account5678 and call SSM from there.
Once you've configured the role relationship (ref: IAM cross-account roles),
you should be able to achieve what you need by assuming the role first in your shell script and then running the SSM command. That way Jenkins will use the temporary credentials and execute the command in the correct account (5678).
This site steps through it pretty well:
Tom Gregory - Jenkins Assume Role
If you Cmd/Ctrl-F on that page and search for 'shell', you should get to the section you need. Hope this helps somewhat.
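A minimal sketch of that shell step, assuming a role in account5678 (hypothetically named CrossAccountSSMRole here) that trusts Role-Jenkins and allows ssm:SendCommand:
# Assume the cross-account role and export the temporary credentials
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::account5678:role/CrossAccountSSMRole \
  --role-session-name jenkins-ssm \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
# Now the command runs in account5678, where i-xyz123 actually lives
aws ssm send-command --region us-east-1 --instance-ids i-xyz123 \
  --document-name AWS-RunShellScript \
  --parameters commands=ifconfig --output text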
While debugging this, I did the following:
In the IAM console at https://console.aws.amazon.com/iam/
1.1. Deleted one role (CodeDeployServiceRole).
1.2. Created a service role.
In the S3 console at https://console.aws.amazon.com/s3/
2.1. Emptied and deleted one bucket (tiagocodedeploylightsailbucket).
2.2. Created a new bucket in EU London (eu-west-2).
Back in the IAM console at https://console.aws.amazon.com/iam/
3.1. Deleted one policy (CodeDeployS3BucketPolicy).
3.2. Created a new policy.
Staying in the IAM console at https://console.aws.amazon.com/iam/
4.1. Deleted one user (LightSailCodeDeployUser).
4.2. Created a new user (with that same name).
Navigated to the Lightsail home page at https://lightsail.aws.amazon.com/
5.1. Deleted previous instance (codedeploy).
5.2. Created one new instance with Amazon Linux (Amazon_Linux_1) (note that if I use Amazon Linux 2, then I would reach this problem),
using the script
# Write the CodeDeploy on-premises configuration (ACCESS_KEY and SECRET_KEY are placeholders)
mkdir -p /etc/codedeploy-agent/conf
cat <<EOT >> /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
---
aws_access_key_id: ACCESS_KEY
aws_secret_access_key: SECRET_KEY
iam_user_arn: arn:aws:iam::525221857828:user/LightSailCodeDeployUser
region: eu-west-2
EOT
# Download and run the CodeDeploy agent installer
wget https://aws-codedeploy-us-west-2.s3.us-west-2.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
I checked that the CodeDeploy agent was running, and then ran the following command in the AWS CLI:
aws deploy register-on-premises-instance --instance-name Amazon_Linux_1 --iam-user-arn arn:aws:iam::525221857828:user/LightSailCodeDeployUser --region eu-west-2
I get:
An error occurred (IamUserArnAlreadyRegisteredException) when calling the RegisterOnPremisesInstance operation: The on-premises instance could not be registered because the request included an IAM user ARN that has already been used to register an instance. Include either a different IAM user ARN or IAM session ARN in the request, and then try again.
Even though I deleted the user, created one with the same name, and then deleted the other existing instance, the IAM user ARN is still the same:
arn:aws:iam::525221857828:user/LightSailCodeDeployUser
To fix it, I went back to step 4 and created a user with a different name; then I updated the script for the instance creation, checked that the CodeDeploy agent was running, and now, when running the following in the AWS CLI,
aws deploy register-on-premises-instance --instance-name Amazon_Linux_1 --iam-user-arn arn:aws:iam::525221857828:user/GeneralUser --region eu-west-2
I get the expected result.
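As an aside, the stale registration itself can presumably be cleaned up with the deregister command, which should free the old IAM user ARN (I did not verify this here):
aws deploy deregister-on-premises-instance --instance-name Amazon_Linux_1 --region eu-west-2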
Hi all!
Code: (entrypoint.sh)
printenv
# Fetch the task's temporary credentials from the ECS credentials endpoint
CREDENTIALS=$(curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
ACCESS_KEY_ID=$(echo "$CREDENTIALS" | jq .AccessKeyId)
SECRET_ACCESS_KEY=$(echo "$CREDENTIALS" | jq .SecretAccessKey)
TOKEN=$(echo "$CREDENTIALS" | jq .Token)
# Export them so the AWS CLI picks them up
export AWS_ACCESS_KEY_ID=$ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$SECRET_ACCESS_KEY
export AWS_SESSION_TOKEN=$TOKEN
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Problem:
I'm trying to fetch AWS S3 files into ECS, inspired by:
AWS Documentation
(But I'm fetching from S3 directly, not through a VPC endpoint.)
I have configured the bucket policy and the role policy (which is passed in the task definition as taskRoleArn and executionRoleArn).
Locally, when I fetch with the AWS CLI and pass the temporary credentials (which I logged from ECS with the printenv command in the entrypoint script), everything works fine; I can save the files on my PC.
On ECS I get this error:
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Where can I find a solution? Has anyone had a similar problem?
First thing: if you are working inside AWS, it is strongly recommended to use an ECS service role, an ECS task role, or an EC2 role. You do not need to fetch credentials from the metadata endpoint yourself.
But it seems like the current role does not have permission to S3, or the entrypoint is not exporting the environment variables properly.
If your task already has a role assigned, then you do not need to export access keys; just call aws s3 cp s3://BUCKET/file.txt /PATH/file.txt and it should work.
IAM Roles for Tasks
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances. Instead of creating and distributing your AWS credentials to the containers or using the EC2 instance’s role, you can associate an IAM role with an ECS task definition or RunTask API operation.
So when you assign a role to the ECS task or ECS service, your entrypoint can be this simple:
printenv
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Also, your export will not work as you are expecting; the best way to pass environment variables to the container is from the task definition. export will not work in this case.
I suggest assigning a role to the ECS task, and it should work as you expect.
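As a quick sanity check inside the container, this sketch (bucket and path taken from the question) confirms which identity the CLI is actually using before the copy:
# Prints the assumed task-role ARN if the role is wired up correctly
aws sts get-caller-identity
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt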
After installing awscli (the AWS command line tool), when I try to run it, I get this message in the terminal:
$ aws dynamodb describe-table --table-name MyTable
An error occurred (AccessDeniedException) when calling the DescribeTable operation:
User: arn:aws:iam::213352837455:user/someuser is not authorized to
perform: dynamodb:DescribeTable on resource: arn:aws:dynamodb:ap-northeast-1:213352837455:table/MyTable
$
But I don't know why I am considered to be logged in as someuser at this moment (in the terminal in particular, but even in AWS).
someuser is just one of the few users I set up on AWS a while ago.
What is the way to log in as the right user in order to use awscli?
If you are running the AWS Command-Line Interface (CLI) on an Amazon EC2 instance that has been assigned a role, then the CLI can use the permissions associated with that role.
If you are not running on an EC2 instance, then you can provide credentials via a credentials file (~/.aws/credentials) or environment variables.
The easiest way to configure the credentials is:
$ aws configure
See: Configuring the AWS CLI
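For example, a minimal ~/.aws/credentials file looks like this (values are placeholders):
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
You can then confirm which identity the CLI is using with:
aws sts get-caller-identity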
Maybe your old credentials are still stored in ~/.aws.
Log in with the correct credentials:
aws configure
For more info, see Configuring the AWS CLI in the official documentation.
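To see which credentials the CLI is currently picking up and where they come from (environment, credentials file, IAM role), there is a built-in check:
aws configure list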
I am trying to get some files from S3 on startup in an EC2 instance by using a User Data script and the command
/usr/bin/aws s3 cp ...
The log tells me that permission was denied, and I believe it is because the AWS CLI does not find any credentials when executing the user data script.
Running the command with sudo after the instance has started works fine.
I have run aws configure both with sudo and without.
I do not want to use a cron job to run something on startup, since I am working with an AMI and often need to change the script; it is more convenient for me to change the user data instead of creating a new AMI every time the script changes.
If possible, I would also like to avoid writing the credentials into the script.
How can I configure awscli in such a way that the credentials are used when running a user data script?
I suggest you remove the AWS credentials from the instance/AMI. Your user data script will be supplied with temporary credentials, when needed, by the instance metadata service.
See: IAM Roles for Amazon EC2
1. Clear/delete the AWS credentials configuration from your instance, and create an AMI.
2. Create a policy that has the minimum privileges needed to run your script.
3. Create an IAM role and attach the policy you just created.
4. Attach the IAM role when you launch the instance (very important).
5. Have your user data script call /usr/bin/aws s3 cp ... without supplying credentials explicitly or using a credentials file (see the sketch below).
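A minimal user data sketch under those assumptions (bucket name and paths are illustrative):
#!/bin/bash
# The instance's IAM role supplies temporary credentials automatically,
# so no 'aws configure' or credentials file is needed here.
/usr/bin/aws s3 cp s3://my-bucket/config/app.conf /etc/app/app.conf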
You can launch your EC2 instance with a pre-defined IAM role whose temporary credentials the instance fetches automatically, which in turn allows it to call aws-cli commands in your User Data script without the need to configure credentials at all.
Here's more info on IAM Roles for EC2:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
It's worth noting here that you'll need to attach the appropriate policies to the IAM Role that you assign to your instance in order for the aws-cli commands to succeed. More information on that can be found here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#working-with-iam-roles
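If you prefer to do that wiring from the CLI, a rough sketch (all names here are hypothetical):
# Create an instance profile, add an existing role to it,
# and attach it to a running instance
aws iam create-instance-profile --instance-profile-name MyS3ReadProfile
aws iam add-role-to-instance-profile --instance-profile-name MyS3ReadProfile --role-name MyS3ReadRole
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=MyS3ReadProfile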