Export data from Amazon Aurora to S3 using outfile?

I am trying to export data from a table in my Aurora database to a bucket I have created in S3, using the "SELECT INTO OUTFILE S3" command that Amazon documents here.
SELECT * FROM testauroradb.`table1` INTO OUTFILE S3 's3://data-dump-bucket/Data';
When I try to run the above line I receive the following error:
Error Code: 1045. Access denied for user 'username'@'%' (using password: YES)
According to some forums, this is due to the user not having the required permissions, which need to be granted by the root user. However, this is the root user, and according to the documentation provided by Amazon, the root user has the ability to perform the "SELECT INTO S3" command. I have also checked and can verify that the user does have the ability to run the "SELECT INTO S3" command. (I know it is not good practice to use the root user, but this is only a test database.)
I also created an IAM role and policy with access to S3 and linked it to the Aurora database. Policy for access to S3:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::data-dump-bucket/Data"
            ]
        }
    ]
}
I attached the policy to an IAM role. Then I added the Role ARN to the parameter aws_default_s3_role in the parameter group that is attached to the Aurora Cluster.
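For reference, a rough sketch of how that association can be done with the AWS CLI; the cluster identifier, parameter group name, account ID, and role name below are placeholders rather than values from this setup:
# Associate the IAM role with the Aurora cluster
aws rds add-role-to-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --role-arn arn:aws:iam::ACCOUNT_ID:role/aurora-s3-export-role
# Point aws_default_s3_role at the same role in the cluster parameter group
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-cluster-params \
    --parameters "ParameterName=aws_default_s3_role,ParameterValue=arn:aws:iam::ACCOUNT_ID:role/aurora-s3-export-role,ApplyMethod=immediate"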
Following some forums, some people had success changing the outbound rules for the security groups to "Type: SSH, Port: 22, Destination: 0.0.0.0/0", but this didn't work for me either. If anyone can tell me what to do or what I have done wrong, I would appreciate it.

I was running into the same issue. I added the AmazonS3FullAccess policy to my IAM role and it worked.
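A hedged sketch of doing that with the AWS CLI; the role name is a placeholder for whatever role is attached to your Aurora cluster:
# Attach the AWS-managed AmazonS3FullAccess policy to the cluster's IAM role
aws iam attach-role-policy \
    --role-name aurora-s3-export-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess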

Run this command on your database:
GRANT LOAD FROM S3 ON *.* TO 'user'@'domain-or-ip-address';

SageMaker Studio domain creation fails due to KMS permissions

Question
Please help me understand the cause of and the solution to this problem.
Problem
SageMaker Studio domain creation fails due to KMS permissions. The IAM role specified for SageMaker, arn:aws:iam::316725000538:role/SageMaker, has the KMS permissions required as specified in https://docs.aws.amazon.com/sagemaker/latest/dg/api-permissions-reference.html.
Domain creation failed
Unable to create Amazon EFS for domain 'd-1dq5c9rpkswy' because you don't have permissions to use the KMS key 'arn:aws:kms:us-east-2:316725000538:key/1e2dbf9d-daa0-408d-a290-1633b615c54f'. See https://docs.aws.amazon.com/sagemaker/latest/dg/api-permissions-reference.html for required permissions for CreateDomain action.
The linked page lists the required IAM permissions:
IAM Permission for CreateDomain action
Amazon SageMaker API Permissions: Actions, Permissions, and Resources Reference
The IAM permissions required for the CreateDomain action have been attached to the IAM role.
I had the same problem when trying to use the aws/s3 key. I created my own Customer Managed Key (CMK) and it worked just fine.
I think it's related to the AWS assigned policy on the aws/s3 key.
This part:
"Condition": {
"StringEquals": {
"kms:CallerAccount": "120455730103",
"kms:ViaService": "s3.us-east-1.amazonaws.com"
}
I don't think SageMaker meets the kms:ViaService condition.
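To illustrate the workaround, here is a rough sketch of creating a customer managed key and passing it to CreateDomain through the AWS CLI; the domain name, VPC ID, and subnet ID are placeholders, and the execution role shown is the one from the question:
# Create a customer managed KMS key (CMK) to use instead of the aws/s3 key
KEY_ID=$(aws kms create-key --description "SageMaker Studio EFS key" \
    --query KeyMetadata.KeyId --output text)
# Create the Studio domain, pointing it at the CMK
aws sagemaker create-domain \
    --domain-name my-studio-domain \
    --auth-mode IAM \
    --default-user-settings ExecutionRole=arn:aws:iam::316725000538:role/SageMaker \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0 \
    --kms-key-id "$KEY_ID"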
Apart from AmazonSageMakerFullAccess, we need to create a new policy and attach it to your user.
Create a new policy with the JSON below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateUserProfile",
                "sagemaker:CreateModel",
                "sagemaker:CreateLabelingJob",
                "sagemaker:CreateFlowDefinition",
                "sagemaker:CreateDomain",
                "sagemaker:CreateAutoMLJob",
                "sagemaker:CreateProcessingJob",
                "sagemaker:CreateTrainingJob",
                "sagemaker:CreateNotebookInstance",
                "sagemaker:CreateCompilationJob",
                "sagemaker:CreateImage",
                "sagemaker:CreateMonitoringSchedule",
                "sagemaker:RenderUiTemplate",
                "sagemaker:UpdateImage",
                "sagemaker:CreateHyperParameterTuningJob"
            ],
            "Resource": "*"
        }
    ]
}
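A hedged sketch of creating this policy and attaching it to your user with the AWS CLI; the policy name, user name, and file name are placeholders:
# Save the JSON above as sagemaker-create-policy.json, then create the policy
POLICY_ARN=$(aws iam create-policy \
    --policy-name SageMakerCreateAccess \
    --policy-document file://sagemaker-create-policy.json \
    --query Policy.Arn --output text)
# Attach the new policy to the IAM user that is creating the Studio domain
aws iam attach-user-policy \
    --user-name my-sagemaker-user \
    --policy-arn "$POLICY_ARN"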

Databricks AWS account setup - AWS storage with error - Missing permissions: PUT, LIST, DELETE

I have created a PREMIUM trial Databricks account with AWS. I have set up the AWS account with user access keys.
For configuring AWS storage, I followed the instructions in the URL below (and set up the bucket policy shown below, as described there).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Grant Databricks Access",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::98765432101:root"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::my-databricks-user-bucket/*",
                "arn:aws:s3:::my-databricks-user-bucket"
            ]
        }
    ]
}
https://docs.databricks.com/administration-guide/account-settings/aws-storage.html
But I am getting the error below.
The provided S3 bucket is valid, but have insufficient permissions to
launch a Databricks deployment. Please double check your settings
according to the tutorial. Missing permissions: PUT, LIST, DELETE
The bucket policy I used above already includes the PUT, LIST, and DELETE actions, yet I am still facing the error.
Note: as trial and error, I changed the Action as below, which allows all actions, but I am still getting the same error.
"Action": "*"
The above error was caused by a mistake I made when setting up the Databricks account with AWS.
As part of setting up the AWS account details in Databricks, a cross-account role should be created (the alternative is to use an access key). When creating the role, the AWS account ID to enter is the Databricks AWS account ID, whose value is 414351767826.
The mistake I made was entering my own AWS account ID instead of the Databricks one. Following the URL below exactly as written will work as expected.
I made the same mistake when setting up AWS storage. Following the documentation exactly as written works perfectly.
https://docs.databricks.com/administration-guide/account-settings/aws-accounts.html
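For reference, a rough sketch of creating such a cross-account role with the AWS CLI, assuming the trust relationship only needs the Databricks account ID mentioned above (the Databricks setup page may also provide an external ID to add as a condition); the role name and file name are placeholders:
# Trust policy that lets the Databricks AWS account assume the role
cat > databricks-trust.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::414351767826:root" },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF
# Create the cross-account role that Databricks will assume
aws iam create-role \
    --role-name databricks-cross-account-role \
    --assume-role-policy-document file://databricks-trust.json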

Could not create role AWSCodePipelineServiceRole

I'm trying to auto-deploy my static website's GitHub changes to my S3 bucket, and when I went to create the pipeline, it threw a "Could not create role AWSCodePipelineServiceRole" error.
My GitHub permissions are set up correctly. The repo name, bucket name, and object key are correct.
Has anyone ever encountered this?
I resolved this issue by:
Step 1: adding the deployment user I was logged on as to a Deployers group, to which I granted the IAMFullAccess policy.
Step 2: successfully creating the pipeline by following the same steps as indicated in the AWS tutorial.
Step 3: once created, reverse engineering the group and the single policy the wizard attached to it. It showed a really long policy that you can't really invent. The IAM section being:
"Statement": [
{
"Action": [
"iam:PassRole"
],
"Resource": "*",
I am just concerned that the Deployers group I created now has IAMFullAccess...
Also, I found that if you are logged in as an admin and add privileges to an IAM user, that user may not immediately enjoy these new privileges. I decided to log out and log back in for them to take effect. Maybe there is a lighter way, but I couldn't find it.
The reason behind the issue is that your IAM user (the user you are logged in as) is not allowed to create a role with the service role name 'AWSCodePipelineServiceRole'.
In order to give the IAM user permission to create a role with a service role name matching 'AWSCodePipeline*' (e.g. 'AWSCodePipelineServiceRole-us-east-1-test'), you need to attach the policy below to your IAM user:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "iam:CreateRole",
            "Resource": "arn:aws:iam::*:role/AWSCodePipeline*"
        }
    ]
}
Try a couple of things:
Try to create the IAM role with a different name (e.g. AWSCodePipelineServiceRole2020).
Give the pipeline a different name and keep the role name as it is (auto-generated by the pipeline).
I hope this will help.
I had to add these 4 IAM actions to get the CodePipeline creation issue fixed (a CLI sketch follows the list):
"iam:CreateRole",
"iam:CreatePolicy",
"iam:AttachRolePolicy",
"iam:PassRole"

Access s3 bucket from different aws account

I am trying to restore a database as part of our testing. The backups exist in an S3 bucket in the prod account. My database is running on an EC2 instance in the dev account.
Can anyone tell me how I can access the prod S3 bucket from the dev account?
Steps:
- I created a role in the prod account with a trust relationship with the dev account
- I added a policy to the role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::prod"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::prod/*"
        }
    ]
}
In the dev account I created a role with the following assume policy:
> { "Version": "2012-10-17", "Statement": [
> {
> "Effect": "Allow",
> "Action": "sts:AssumeRole",
> "Resource": "arn:aws:iam::xxxxxxxxx:role/prod-role"
> } ] }
But I am unable to access the S3 bucket. Can someone point out where I am wrong?
I also added the above policy to an existing role, so does that mean it is not working because of my instance profile (inconsistent error)?
Please help and correct me if I am wrong anywhere. I am looking for a solution in terms of a role and not a user.
Thanks in advance!
So let's recap: you want to access your prod bucket from the dev account.
There are two ways to do this. Method 1 is your approach; however, I would suggest Method 2.
Method 1: Use roles. This is what you described above. It works, but you cannot sync bucket to bucket if they're in different accounts, as different access keys will need to be exported each time. You'll most likely have to sync the files from the prod bucket to the local filesystem, then from the local filesystem to the dev bucket.
How to do this:
Using roles, create a role in the production account that has access to the bucket. The trust relationship of this role must trust the role in the dev account that is assigned to the EC2 instance. Attach the policy granting access to the prod bucket to that role. Once that's all configured, the EC2 instance role in dev must be updated to allow sts:AssumeRole on the role you've defined in production. On the EC2 instance in dev, run aws sts assume-role --role-arn <the role on prod> --role-session-name <a name to identify the session>. This will give you back three values: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN. On your EC2 instance, export them, for example with set -a; AWS_ACCESS_KEY_ID=${access_key_id}; AWS_SECRET_ACCESS_KEY=${secret_access_key}; AWS_SESSION_TOKEN=${session_token}. Once those variables have been exported, run aws sts get-caller-identity and it should show that you're now acting as the role you provisioned in production. You should now be able to sync the files to the local system; once that's done, unset the AWS keys you set as environment variables, then copy the files from the EC2 instance to the bucket in dev. Notice how there are two copy steps here? That can get quite annoying; see Method 2 for how to avoid this.
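As a rough sketch of that flow on the dev EC2 instance (the role ARN, session name, local path, and bucket names are placeholders, and jq is assumed to be available to parse the JSON that STS returns):
# Assume the cross-account role in the prod account
CREDS=$(aws sts assume-role \
    --role-arn arn:aws:iam::PROD_ACC_ID:role/prod-role \
    --role-session-name restore-session)
# Export the temporary credentials returned by STS
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Credentials.SessionToken)
# Confirm we are now acting as the prod role
aws sts get-caller-identity
# Copy the backups from the prod bucket to the local filesystem
aws s3 sync s3://prodbucket /tmp/backups
# Drop the prod credentials, then copy from local to the dev bucket
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
aws s3 sync /tmp/backups s3://devbucket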
Method 2: Update the prod bucket policy to trust the dev account. This means you can access the prod bucket from dev and do a bucket-to-bucket sync/cp.
I would highly recommend you take this approach, as it means you can copy directly between buckets without having to sync to the local filesystem.
To do this, you will need to update the bucket policy on the bucket in production to have a Principal block that trusts the AWS account ID of dev. For example, update your prod bucket policy to look something like this:
NOTE: granting s3:* is bad, and granting full access to the account probably isn't recommended, since anyone in the account with the right S3 permissions can then access this bucket, but for simplicity I'm going to leave this here:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::DEV_ACC_ID:root"
            },
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::PROD_BUCKET_NAME",
                "arn:aws:s3:::PROD_BUCKET_NAME/*"
            ]
        }
    ]
}
Once you've done this, in the dev account, attach the policy in your main post to the dev EC2 instance role (the one that grants S3 access). Now when you connect to the dev instance, you do not have to export any environment variables; you can simply run aws s3 ls s3://prodbucket and it should list the files.
You can sync the files between the two buckets using aws s3 sync s3://prodbucket s3://devbucket --acl bucket-owner-full-control and that should copy all the files from prod to dev, and on top of that should update the ACLs of each file so that dev owns them (meaning you have full access to the files in dev).
You need to assume the role in the production account from the dev account. Call sts:AssumeRole and then use the credentials returned to access the bucket.
You can alternatively add a bucket policy that allows the dev account to read from the prod account. You wouldn't need the cross account role in the prod account in this case.

How to lockdown S3 bucket to specific users and IAM role(s)

In our environment, all IAM user accounts are assigned a customer-managed policy that grants read-only access to a lot of AWS services. Here's what I want to do:
Migrate a sql server 2012 express database from on-prem to a RDS instance
Limit access to the S3 bucket containing the database files
Here are the requirements according to AWS:
An S3 bucket to store the .bak database file
A role with access to the bucket
The SQLSERVER_BACKUP_RESTORE option attached to the RDS instance
So far, I've done the following:
Created a bucket under the name "test-bucket" (and uploaded the .bak file here)
Created a role under the name "rds-s3-role"
Created a policy under the name "rds-s3-policy" with these settings:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::test-bucket/"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectMetaData",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::test-bucket/*"
        }
    ]
}
Assigned the policy to the role
Gave the RDS service AssumeRole permission in the role's trust policy so it can assume the role created above
Created a new option group in RDS with the SQLSERVER_BACKUP_RESTORE option and linked it to my RDS instance
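For reference, a rough sketch of adding that option with the AWS CLI; the option group name and account ID are placeholders, and the role is the one created above:
# Add the native backup/restore option to the option group,
# pointing it at the role that can read the .bak file from S3
aws rds add-option-to-option-group \
    --option-group-name my-sqlserver-option-group \
    --options "OptionName=SQLSERVER_BACKUP_RESTORE,OptionSettings=[{Name=IAM_ROLE_ARN,Value=arn:aws:iam::ACCOUNT_ID:role/rds-s3-role}]" \
    --apply-immediately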
With no restrictions on my S3 bucket, I can perform the restore just fine; however, I can't find a solid way of restricting access to the bucket without hindering the RDS service from doing the restore.
In terms of my attempts to restrict access to the S3 bucket, I found a few posts online recommending using an explicit Deny statement to deny access to all types of principals and grant access based on some conditional statements.
Here's the contents of my bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1486769843194",
    "Statement": [
        {
            "Sid": "Stmt1486769841856",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::test-bucket",
                "arn:aws:s3:::test-bucket/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:userid": [
                        "<root_id>",
                        "<user1_userid>",
                        "<user2_userid>",
                        "<user3_userid>",
                        "<role_roleid>:*"
                    ]
                }
            }
        }
    ]
}
I can confirm the bucket policy does restrict access to only the IAM users I specified, but I am not sure how it treats IAM roles. I used the :* syntax above per a document I found on the AWS forums, where the author stated that ":*" is a catch-all for every principal that assumes the specified role.
The only problem I'm having is that, with this bucket policy in place, I get an access denied error when I attempt to do the database restore. Has anyone ever done something like this? I've been going at it all day and haven't been able to find a working solution.
The following, admittedly, is guesswork... but reading between the lines of the somewhat difficult to navigate IAM documentation and elsewhere, and taking into account the way I originally interpreted it (incorrectly), I suspect that you are using the role's name rather than the role's ID in the policy.
Role IDs look similar to AWSAccessKeyIds except that they begin with AROA....
For the given role, find RoleId in the output from this:
$ aws iam get-role --role-name ROLE-NAME
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
Use caution when creating a broad Deny policy. You can end up denying s3:PutBucketPolicy to yourself, which leaves you in a situation where your policy prevents you from changing the policy... in which case, your only recourse is presumably to persuade AWS support to remove the bucket policy. A safer configuration would be to use this to deny only the object-level permissions.
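As an illustration of that safer configuration, here is a rough sketch of applying a Deny limited to object-level actions with the AWS CLI; the aws:userid placeholders mirror the ones in the question, and you should verify the list against your own requirements before applying it:
# Bucket policy denying object-level actions to everyone except the
# listed users and any session of the allowed role (<role_roleid>:*)
cat > restrict-objects.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::test-bucket/*",
            "Condition": {
                "StringNotLike": {
                    "aws:userid": [
                        "<root_id>",
                        "<user1_userid>",
                        "<role_roleid>:*"
                    ]
                }
            }
        }
    ]
}
EOF
# Apply the policy to the bucket
aws s3api put-bucket-policy --bucket test-bucket --policy file://restrict-objects.json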