I'm trying to transfer a domain name from one AWS account to another using the AWS CLI. When I try to transfer the domain I get the following error:
Connect timeout on endpoint URL: "https://route53domains.eu-west-1.amazonaws.com/"
I'm using the following command to transfer the domain:
aws route53domains transfer-domain-to-another-aws-account --domain-name <value> --account-id <value> --profile personal
I checked the AWS config file and it looks fine to me:
[profile personal]
aws_access_key_id = somekey
aws_secret_access_key = somesecretkey
region = us-west-2
I've also made sure that the user has the correct permissions. The user has the following policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "route53domains:*",
"Resource": "*"
}
]
}
and also has the AdministratorAccess AWS managed policy.
To make sure I can communicate with AWS, I ran a simple command, aws s3 ls --profile personal, and it works: AWS responds with the contents of S3.
The version of the AWS CLI I have installed is:
aws-cli/2.0.18 Python/3.7.4 Darwin/19.4.0 botocore/2.0.0dev22
I'm not sure where I'm going wrong.
You will need to specify --region us-east-1 because Amazon Route 53 is a global service.
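For example (the domain name and account ID here are placeholders):
aws route53domains transfer-domain-to-another-aws-account --domain-name example.com --account-id 111111111111 --region us-east-1 --profile personal
The --region flag overrides the profile's region for this single call, sending the request to the only endpoint the route53domains API has.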
I was going to rebut the answer given by John Rotenstein but, on closer examination, it is indeed correct. It could do with some more detail though, so I shall elaborate.
The OP didn't miss anything: the need for --region us-east-1 (and only us-east-1) to be included in the command is not mentioned in either the route53domains docs or the docs of the transfer-domain-to-another-aws-account subcommand. It does pop up on the list-operations page, but even there it's not as noticeable as it could be^^.
^^: You might ask yourself why this isn't built into the awscli, defaulting to us-east-1 for this set of commands, since there are no other options.
I'm currently writing a Terraform module for EC2 ASGs with ECS. Everything about starting the instances works, including IAM-dependent actions such as KMS-encrypted volumes. However, the ECS agent fails with:
Error getting ECS instance credentials from default chain: NoCredentialProviders: no valid providers in chain.
Unfortunately, most posts I find about this are about the CLI being run without credentials configured; however, this should of course use the instance profile.
The posts I do find regarding that are about missing IAM policies for the instance role. However, I do have this trust relationship:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"ec2.amazonaws.com",
"ecs.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
(I added ECS because someone on SO had it in there; I don't think that's right. I also removed some conditions.)
It has these policies attached:
AmazonSSMManagedInstanceCore
AmazonEC2ContainerServiceforEC2Role
When I connect via SSH and run any awscli command, I get the error
Unable to locate credentials. You can configure credentials by running "aws configure".
But with
curl http://169.254.169.254/latest/meta-data/iam/info
I see the correct instance profile ARN and with
curl http://169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance
I see temporary credentials. With these in the awscli configuration,
aws sts get-caller-identity
returns the correct results. There are no iptables rules or routes blocking access to the metadata service and I've deactivated IMDSv2 token requirement for the time being.
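One caveat that may matter here: the identity-credentials path above serves credentials that EC2 uses internally, not the instance-profile credentials that the CLI and the ECS agent consume through the default chain. Those come from a different path (the role name below is a placeholder for whatever the instance profile's role is called):
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
The first call should print the role name; the second should return a JSON document with AccessKeyId, SecretAccessKey, and Token. If that second call fails while identity-credentials works, that would point at the instance profile itself.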
I'm using the latest stable version of ECS-optimized Amazon Linux 2.
What might be the issue here?
We are asked to upload a file to a client's S3 bucket; however, we do not have an AWS account (nor do we plan on getting one). What is the easiest way for the client to grant us access to their S3 bucket?
My recommendation would be for your client to create an IAM user for you that is used for the upload. Then, you will need to install the AWS CLI. On your client's side, there will be a user whose only permission is to write to their bucket. This can be done pretty simply and will look something like:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::the-bucket-name/*",
"arn:aws:s3:::the-bucket-name"
]
}
]
}
I have not thoroughly tested the above permissions!
Then, on your side, after you install the AWS CLI you need two files. They both live in the home directory of the user that runs your script. The first is $HOME/.aws/config, which has something like:
[default]
output=json
region=us-west-2
You will need to ask them what AWS region the bucket is in. Next is $HOME/.aws/credentials. This will contain something like:
[default]
aws_access_key_id=the-access-key
aws_secret_access_key=the-secret-key-they-give-you
They must give you the region, the access key, the secret key, and the bucket name. With all of this you can now run something like:
aws s3 cp local-file-name.ext s3://the-client-bucket/destination-file-name.ext
This will transfer the local file local-file-name.ext to the bucket the-client-bucket under the name destination-file-name.ext. They may want a different path in the bucket.
To recap:
Client creates an IAM user that has very limited permissions; only API access is needed, not console access.
You install the AWS CLI.
Client gives you the access key and secret key.
You configure the machine that does the transfers with those credentials.
You can now push files to the bucket.
You do not need an AWS account to do this.
I deployed my Django project using AWS Elastic Beanstalk and S3, and when I tried to upload a profile avatar it showed Server Error (500).
My Sentry log shows:
"An error occurred (IllegalLocationConstraintException) when calling the PutObject operation: The eu-south-1 location constraint is incompatible for the region specific endpoint this request was sent to."
I think this error appeared because my bucket is in eu-south-1 but I'm trying to access it and create a new object from Seoul, Korea.
Also, the AWS documentation says that IllegalLocationConstraintException indicates you are trying to access a bucket from a different Region than where the bucket exists, and that to avoid this error you should use the --region option, for example: aws s3 cp awsexample.txt s3://testbucket/ --region ap-east-1.
(https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html)
But this solution might be just for uploading a file from the AWS CLI...
I tried to change my bucket policy by adding this, but it doesn't work:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::{BucketName}/*"
}
]
}
I don't know what I should do, or why access from other regions isn't allowed.
How can I allow creating, updating, and removing objects in my bucket from all around the world?
This is my first deployment, please help me 🥲
Is your Django Elastic Beanstalk instance in a different region from the S3 bucket? If so, you need to set the AWS_S3_REGION_NAME setting as documented here.
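Assuming the project uses django-storages with S3Boto3Storage, that's a one-line addition to settings.py (region taken from the error message):
AWS_S3_REGION_NAME = "eu-south-1"
With this set, boto3 signs requests against the bucket's own regional endpoint instead of the default one, which is what the IllegalLocationConstraintException is complaining about.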
When trying to create an App Runner service using aws apprunner create-service --cli-input-json file://./myconfig.json, I get the error in the title:
An error occurred (InvalidRequestException) when calling the CreateService operation: Error in assuming access role arn:aws:iam::1234:role/my-role
The myconfig.json I'm using is fairly similar to the example json from the AWS CreateService docs, & I don't think it's particularly relevant here.
The error seems to imply I should assume the role... but I've already assumed the role with this command from this stackoverflow q/a:
eval $(aws sts assume-role --role-arn arn:aws:iam::1234:role/my-role --role-session-name apprunner-stuff1 --region us-east-1 | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)\n"')
This runs without error & when I run:
aws sts get-caller-identity
it outputs the following which looks correct I think:
{
"UserId": "SOME1234NPC:apprunner-stuff1",
"Account": "1234",
"Arn": "arn:aws:sts::1234:assumed-role/my-role/apprunner-stuff1"
}
At this point, the error message doesn't make sense & I'm wondering what dumb IAM thing am I doing wrong?
App Runner-wise, I've attempted to give my-role all the permissions from the AppRunner IAM doc needed to run CreateService, but I could easily have missed some. The error message here doesn't seem to indicate that the role lacks permissions, but it might be relevant.
Instead of trying to create a role following the IAM doc permissions, I followed the UI AppRunner guide here. That created a role that was auto-named AppRunnerECRAccessRole. I used that role as my AccessRoleArn in the json configuration, making that json config section look like:
"AuthenticationConfiguration": {
"AccessRoleArn": "arn:aws:iam::12345:role/service-role/AppRunnerECRAccessRole"
},
I followed this stackoverflow q/a to allow my user / group to assume the AppRunnerECRAccessRole, with a policy applied to the user/group like:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Resource": [
"arn:aws:iam::12345:role/my-role",
"arn:aws:iam::12345:role/service-role/AppRunnerECRAccessRole"
]
}
]
}
After this I was just able to run:
aws apprunner create-service --cli-input-json file://./myconfig-with-ui-role-arn.json
& it worked! (without even assuming the role via the eval command). Though I gave the user access to both roles, creating the service only worked via the new AppRunnerECRAccessRole role. So I think the takeaway / main answer is to create an App Runner service via the UI once & then reuse its service role.
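For reference, the likely reason the auto-created role works: the AccessRoleArn role has to be assumable by the App Runner build service itself, not by your user. The trust policy on AppRunnerECRAccessRole looks roughly like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "build.apprunner.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
If my-role lacked that trust relationship, App Runner could not assume it no matter which credentials ran create-service, which would explain the original error.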
I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed the awscli on it and configured my user's access key. My user has admin access and the full S3 access policy too. I can see the buckets in the AWS console, but when I try to run aws s3 ls on the instance it returns An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.
What else do I need to do to add permissions to the role or to my user properly, so I can list and sync objects between S3 and the instance?
I ran into this issue as well.
I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out that if you have AWS credentials set in your bash_profile or bashrc, the awscli will default to using those instead.
I changed the environment variables in bash_profile and bashrc to the proper keys and everything started working.
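If the stray credentials were exported as environment variables, note that AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY take precedence over both the credentials file and the instance profile, so unsetting them in the current session is another fix:
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
After that, aws sts get-caller-identity should report the instance role again.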
Turns out I forgot I had to do MFA to get an access token to be able to operate in S3. Thank you, everyone, for your responses.
There appears to be confusion about when to use IAM Users and IAM Roles.
When using an Amazon EC2 instance, the best method to grant permissions is:
Create an IAM Role and attach policies to grant the desired permissions
Associate the IAM Role with the Amazon EC2 instance. This can be done at launch time, or afterwards (Actions/Instance Settings/Attach IAM Role).
Any application running on the EC2 instance (including the AWS CLI) will now automatically receive credentials. Do not run aws configure.
If you are wanting to grant permissions to your own (non-EC2) computer, then:
Create an IAM User (or use your existing one) and attach policies to grant the desired permissions
On the computer, run aws configure and enter the Access Key and Secret Key associated with the IAM User. This will store the credentials in ~/.aws/credentials.
Any application running on this computer will then use credentials from the local credentials file
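Either way, a quick check of which credentials the CLI is actually using is:
aws sts get-caller-identity
On an EC2 instance relying on an attached role, the Arn should look like arn:aws:sts::<account-id>:assumed-role/<role-name>/<instance-id>; if an IAM user ARN comes back instead, locally stored credentials are overriding the instance role.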
Create an IAM user with this permission policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucketName/*"
}
]
}
Save the Access Key ID & Secret Access Key.
sudo apt install awscli
aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxZI4
AWS Secret Access Key [None]: 8Bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Default region name [None]: region (ex. us-east-2)
Default output format [None]: json
aws s3 ls s3://s3testingankit1/
This problem can occur not only with the CLI but also when calling the S3 API, for example.
The reason for this error can be a wrong configuration of the access permissions on the bucket.
For example, with the setup below you're giving full privileges to perform actions on the bucket's internal objects, BUT not specifying any action on the bucket itself:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::<name-of-bucket>/*"
]
}
]
}
This will lead to the mentioned
... (AccessDenied) when calling the ListBuckets ...
error.
In order to fix this, you should allow the application to access the bucket itself (1st statement item) and to edit all objects inside the bucket (2nd statement item):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::<name-of-bucket>"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::<name-of-bucket>/*"
]
}
]
}
There are shorter configurations that might solve the problem, but the one specified above also tries to keep the security permissions fine-grained.
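If this is managed as an inline policy on an IAM user, it can be applied from the CLI too (the user and policy names here are made up):
aws iam put-user-policy --user-name upload-user --policy-name s3-bucket-access --policy-document file://policy.json
where policy.json contains the two-statement document above.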
I ran into this yesterday running a script I ran successfully in September 2021.
TL;DR: add --profile your.profile.name to the end of the command
I have multiple profiles on the login I was using. I think something in the AWS environment changed, or perhaps I had previously done something that was able to bypass this. Back in September I set the profile with:
aws configure set region us-west-2 --profile my.profile.name
But yesterday, after the failure, I saw that aws sts get-caller-identity was returning a different identity. After some documentation searching, I found the additional method for specifying the profile, and operations like:
aws s3 cp myfile s3://my-s3-bucket --profile my.profile.name
all worked.
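As an alternative to appending --profile to every command, the profile can be exported once per shell session (same profile name as above):
export AWS_PROFILE=my.profile.name
The CLI then behaves as if --profile my.profile.name were passed to each command.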
I have a Windows machine with CyberDuck, from which I was able to access a destination bucket, but when trying to access the bucket from a Linux machine with the aws command, I got "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied".
I then executed the same command, aws s3 ls, from a command-line interface on the Windows machine and it worked just fine. It looks like there is some security restriction on the AWS side for the machine/IP.