start-backup-job permission issue - amazon-web-services

I hope someone can help me with this. I am experiencing a weird issue while using the AWS CLI to start an on-demand backup.
I already have some backup jobs running for EC2 instances. However, for some automation I wanted to have on-demand backups as well. For that reason, when I try to run a backup using the CLI, I get the following error.
An error occurred (AccessDeniedException) when calling the StartBackupJob operation: Insufficient privileges to perform this action.
The command I am using is:
aws backup start-backup-job --backup-vault-name primary --resource-arn arn:aws:ec2:eu-west-1:123456789:volume/vol-0abcdef1234 --iam-role-arn arn:aws:iam::123456789:role/service-role/AWSBackupDefaultServiceRole --region eu-west-1
The user I am using here has administrator access to the account.
Can someone please help me? I am out of options here.

Since you can assign an access policy to a Backup vault, check whether any policy is attached to the vault you are trying to access. Both the IAM policy for your admin user and the resource-based policy assigned to your Backup vault need to allow the action. See:
Setting Access Policies on Backup Vaults and Recovery Points
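To confirm whether a resource-based policy is attached, the vault policy can be read back with the CLI; a minimal check, assuming the vault name primary from your command:
aws backup get-backup-vault-access-policy --backup-vault-name primary --region eu-west-1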
I did not have any policy assigned to the Backup vault and was able to create the backup. I also have admin access, like you:
$ aws backup start-backup-job --backup-vault-name primary \
--resource-arn arn:aws:ec2:us-east-1:1234567890:volume/vol-04a514599941274c3 \
--iam-role-arn arn:aws:iam::1234567890:role/service-role/AWSBackupDefaultServiceRole --region us-east-1
{
"BackupJobId": "5435950f-2be1-4177-92dc-7bsddsdd",
"CreationDate": "2021-02-04T16:25:03.370000+01:00"
}
How can I use the AWS CLI to create an AWS Backup plan or run an on-demand job?
Last but not least, check your environment to make sure the credentials you think should be used are actually the ones in effect, by running sts get-caller-identity.
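For example (the values below are placeholders; what matters is that the Arn is the admin user you expect to be using):
$ aws sts get-caller-identity
{
"UserId": "AIDAEXAMPLE",
"Account": "123456789",
"Arn": "arn:aws:iam::123456789:user/your-admin-user"
}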

Related

Unable to change IAM role for Oracle RDS

I have an Oracle RDS instance that had an s3_integration IAM role attached. I removed it using Terraform, but it seems it was never actually removed from the instance itself.
Now I am unable to change, delete, or add any s3_integration roles to the instance.
Attempts to change the name or delete it, using either Terraform or the UI, have been unsuccessful. Has anyone had this happen? How can it be fixed? I cannot find any information about why the role is invalid, and attempting to upload a dump using the rdsadmin_s3_tasks.upload_to_s3 command shows this error: "[ERROR] The DB instance doesn't have credentials to access the specified Amazon S3 bucket. To grant access, add the S3_INTEGRATION role to the DB instance."
I've rebooted the database, but it had no effect.
Solved by removing the IAM role using the AWS CLI. The UI did not show the role, but it could still be found by describing the DB instance.
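For reference, something along these lines should list any roles still associated with the instance (db_name as in the removal command below):
aws rds describe-db-instances \
--db-instance-identifier db_name \
--query 'DBInstances[0].AssociatedRoles'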
It was then removed using:
aws rds remove-role-from-db-instance \
--db-instance-identifier db_name \
--role-arn arn:aws:iam::xxxxxx:role/rds-s3-datadump-role \
--feature-name S3_INTEGRATION

Insufficient access in AWS whilst using the AWS CLI

I've been trying to access a project in AWS Device Farm using the AWS CLI.
Steps taken:
Downloaded the AWS CLI tool
Configured my credentials with the aws configure command, following https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html
Executed aws devicefarm list-uploads --arn myProjectArn
and got this error:
An error occurred (AccessDeniedException) when calling the ListUploads operation: User: arn:aws:iam::replacingANumber:user/myUserName is not authorized to perform: devicefarm:ListUploads on resource: arn:aws:devicefarm:us-west-2:replacingANumber:project:replacingALongString with an explicit deny
The docs (https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting_iam.html) say I'm missing permissions, but the DevOps team at my company says I have all the permissions.
What am I missing?
Either a misconfigured AWS CLI or insufficient permissions.
This can be one of two things:
Your AWS CLI is misconfigured. Make sure that when you run aws sts get-caller-identity, you get the same identity as the one the DevOps team claims has the correct permissions. Also make sure that your default region is us-west-2.
If the above is correctly set up, then the problem comes from the permissions defined in the IAM policy. Note that the error mentions an explicit deny, so look for a Deny statement as well as a missing Allow. If you are able to view the policy associated with your user/role, you can use the policy simulator to figure out which permission is missing or denied.
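If you prefer the CLI over the console, roughly the same check can be done with simulate-principal-policy, using the user ARN from the error message; the evaluation result will report allowed, implicitDeny, or explicitDeny, which lines up with the "explicit deny" in your error:
aws iam simulate-principal-policy \
--policy-source-arn arn:aws:iam::replacingANumber:user/myUserName \
--action-names devicefarm:ListUploads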

Service role EMR_DefaultRole has insufficient EC2 permissions

While creating an AWS EMR cluster, I always get the issue: Service role EMR_DefaultRole has insufficient EC2 permissions.
The cluster then terminates automatically. I have even followed the steps in the AWS documentation for recreating the EMR-specific roles, but with no progress. Please guide me on how to resolve the issue "Service role EMR_DefaultRole has insufficient EC2 permissions".
EMR needs two roles to start the cluster: 1) an EC2 instance profile role and 2) an EMR service role. The service role should have enough permissions to provision the new resources needed to start the cluster: EC2 instances, their network, and so on. There could be many reasons for this common error:
Verify the resources and their actions. Refer to https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-iam-role.html.
Check whether you are passing the tag that signifies that the cluster should use the EMR managed policies:
{
"Key": "for-use-with-amazon-emr-managed-policies",
"Value": "true"
}
Finally, try to find the exact reason from CloudTrail. Go to AWS > CloudTrail. In the event history, enable the error code column so that you can see the exact error. If you find an error like 'You are not authorized to perform this operation. Encoded authorization failure message', open the event details, pick up the encoded error message, and decode it using the AWS CLI:
aws sts decode-authorization-message --encoded-message <message>. This will show you the complete role details, event, resources, and action. Compare that with the IAM permissions and you can find the missing permission or parameter that you need to pass while creating the job flow.
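A minimal sketch of that decode step, extracting just the readable part (the encoded string comes from the CloudTrail event):
aws sts decode-authorization-message \
--encoded-message "<encoded message from the CloudTrail event>" \
--query DecodedMessage --output text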

Unable to retrieve secret from secretsmanager on aws-ec2 using an IAM role

Goal: retrieve a secret from Secrets Manager on an AWS EC2 instance programmatically through the command line.
I have created an IAM role with policies that grant full access to AWS Secrets Manager and EC2, and that also allow it to assume the role and modify the role of any EC2 instance.
I created an EC2 instance, attached the IAM role to it, and executed the following steps:
- aws secretsmanager list-secrets
An error occurred (UnrecognizedClientException) when calling the ListSecrets operation: The security token included in the request is invalid.
I get the above error, even though I am able to retrieve the security credentials from the instance metadata.
- Am I missing something here? I basically want to retrieve the secret on an EC2 instance in a secure way.
- When I try to run the above command to list secrets, the CLI complains that it needs a region. My EC2 instance and secrets are all in us-east-2, so I use that region, and it still does not work.
Any suggestions/pointers would be highly appreciated. Thanks!
Here is how I would troubleshoot.
Check whether the instance is aware of the IAM role attached to it:
aws sts get-caller-identity
Try passing the region to the command:
aws secretsmanager list-secrets --region us-east-2
I would also check whether AWS_REGION or AWS_DEFAULT_REGION is set, although even if these variables are set, passing --region should override them.
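For example, a quick way to see whether either variable is set in the current shell:
env | grep -E 'AWS_(DEFAULT_)?REGION'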
Hope this helps you get somewhere.
Have you run "aws configure" on the instance? It sounds like it might be using the credentials stored there rather than the EC2 instance role. See the references below for the sequence in which credentials are checked, but basically the EC2 instance role is the last place the CLI looks; if it finds credentials earlier, it will use them.
See here for the priority/sequence: https://docs.aws.amazon.com/amazonswf/latest/awsrbflowguide/set-up-creds.html
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html ("Using the Default Credential Provider Chain")
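To see where the CLI is actually resolving credentials from, aws configure list is useful; the Type column shows the source of each value (for example env, shared-credentials-file, or the instance role):
$ aws configure list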

Permissions for creating and attaching an EBS volume to an EC2Resource in AWS Data Pipeline

I need more local disk than is available to EC2Resources in an AWS Data Pipeline. The simplest solution seems to be to create and attach an EBS volume.
I have added the ec2:CreateVolume and ec2:AttachVolume permissions to both DataPipelineDefaultRole and DataPipelineDefaultResourceRole.
I have also tried setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the shell for an IAM role with the same permissions, but no luck.
Is there some other permission needed, is it not using the roles it says it uses, or is this not possible at all?
The Data Pipeline ShellCommandActivity has a script URI pointing to a shell script that executes this command:
aws ec2 create-volume --availability-zone eu-west-1b --size 100 --volume-type gp2 --region eu-west-1 --tag-specifications 'ResourceType=volume,Tags=[{Key=purpose,Value=unzip_file}]'
The error I get is:
An error occurred (UnauthorizedOperation) when calling the CreateVolume operation: You are not authorized to perform this operation.
I had completely ignored the encoded authorization message, thinking it was just some internal AWS thing. Your comment made me take a second look, kdgregory. It turns out the reference to CreateVolume was somewhat of a red herring.
Decoding the message, I see that it fails with "action":"ec2:CreateTags", meaning the role lacks permission to create tags. I added this permission and it works now.
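For anyone hitting the same thing: because --tag-specifications tags the volume at creation time, the role also needs ec2:CreateTags. A sketch of the relevant policy statement (not the exact policy I used; tighten the Resource as appropriate):
{
"Effect": "Allow",
"Action": [
"ec2:CreateVolume",
"ec2:CreateTags"
],
"Resource": "*"
}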