Databricks AWS account setup - AWS storage error: Missing permissions: PUT, LIST, DELETE

I have created a PREMIUM trial Databricks account with AWS and set up my AWS account with user access keys.
To configure AWS storage, I followed the instructions in the URL below and set up the bucket policy as shown:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Grant Databricks Access",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::98765432101:root"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::my-databricks-user-bucket/*",
        "arn:aws:s3:::my-databricks-user-bucket"
      ]
    }
  ]
}
https://docs.databricks.com/administration-guide/account-settings/aws-storage.html
But I am getting the error below.
The provided S3 bucket is valid, but have insufficient permissions to
launch a Databricks deployment. Please double check your settings
according to the tutorial. Missing permissions: PUT, LIST, DELETE
The bucket policy above already includes the PUT, LIST, and DELETE actions, yet I am still getting this error.
Note: As trial and error, I changed the Action as below to allow all actions, but I still get the same error.
"Action": "*"

The above error was caused by a mistake I made when setting up the Databricks account with AWS.
As part of entering the AWS account details in Databricks, a cross-account role should be created (the alternative is to use an access key). When creating the role, the AWS account ID you enter must be the Databricks AWS account ID, which is 414351767826.
My mistake was entering my own AWS account ID instead of the Databricks one. Following the URL below as written will work as expected.
I made the same mistake when setting up AWS storage: the bucket policy's Principal must also reference the Databricks account ID. Following the documentation as written works perfectly.
https://docs.databricks.com/administration-guide/account-settings/aws-accounts.html
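For reference, a minimal CLI sketch of creating that cross-account role; the role name and file name are my own placeholders, and the external ID value is taken from the Databricks account console. The same 414351767826 account ID is also what belongs in the Principal of the storage bucket policy shown above.
# Sketch only: role name and file name are placeholders, not from the question.
cat > databricks-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::414351767826:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<your-databricks-account-id>" }
      }
    }
  ]
}
EOF
aws iam create-role \
  --role-name databricks-cross-account-role \
  --assume-role-policy-document file://databricks-trust.json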

Related

CLIENT_ERROR: authorization failed for primary source and source version

I opened a free AWS account to learn and created an Administrator user group and user in IAM for myself.
I am following a tutorial "Automating your API testing with AWS CodeBuild, AWS CodePipeline, and Postman."
I am getting the error CLIENT_ERROR: authorization failed for primary source and source version in the DOWNLOAD_SOURCE phase of the Build transition in CodePipeline.
I followed the directions in an earlier post, "AWS CodeBuild failed CLIENT_ERROR: authorization failed for primary source and source version", with no success.
I added and attached a policy for connection-permissions in my service role as directed like so:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codestar-connections:UseConnection",
      "Resource": "insert connection ARN here"
    }
  ]
}
Later, I changed the Action above to
"codepipeline:GetPipelineState"
I added and attached a policy for GitPull like so:
{
  "Action": [
    "codecommit:GitPull"
  ],
  "Resource": "*",
  "Effect": "Allow"
},
I have disconnected and reconnected my connection to GitHub and also tried creating a new personal access token with no success.
I have tried making my S3 bucket public with the following Allow policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::yourbucketname/*"
    }
  ]
}
I also tried updating the Node.js version in the source code to 16.18.0.
I am stuck. The resources I have found keep pointing me to the same AWS page I mentioned. I don't know what else to do. I would appreciate any help.
My repo is located at https://github.com/venushofler/my-aws-codepipeline-codebuild-with-postman.git
The answer to the above was to add a default set of access permissions to my users, groups, and roles in my account. I found documentation at https://docs.aws.amazon.com/codebuild/latest/userguide/setting-up.html which in part stated, "To add a default set of CodeBuild access permissions to an IAM group or IAM user, choose Policy Type, AWS Managed, and then do the following:
To add full access permissions to CodeBuild, select the box named AWSCodeBuildAdminAccess, choose Policy Actions, and then choose Attach."
This worked to allow the Build and Deploy stage to succeed.
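For completeness, the same attachment can be done from the CLI; a sketch, assuming the administrator group mentioned above is named Administrators (adjust to your actual group name):
# Attach the AWS managed CodeBuild admin policy to the IAM group.
# The group name "Administrators" is an assumption, not from the original post.
aws iam attach-group-policy \
  --group-name Administrators \
  --policy-arn arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess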

AWS Backup: Missing permission iam:PassRole

I'm currently spinning in circles trying to restore from an AWS Backup and am running into permissions errors. I have administrator access to my AWS account. I've tried creating a new policy and attaching it to my user account in IAM.
The issue I can't seem to get around is that I need to add the permission iam:PassRole, but I can't seem to find it anywhere within the AWS console. How can I add this permission to my policy?
EDIT: I've created a policy with all backup permissions allowed, including iam:PassRole, but I am still receiving the error message "You are not authorized to perform this operation." when trying to perform the backup. The policy I've created and attached to my user looks as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "backup:*",
        "iam:PassRole",
        "iam:GetRole"
      ],
      "Resource": "*"
    }
  ]
}
“To successfully do a restore with the original instance profile, you will need to make changes to the restore policy. If you apply instance profile during the restore, you must update the operator role and add PassRole permissions of the underlying instance profile role to EC2. Otherwise, Amazon EC2 won’t be able to authorize the instance launch and it will fail.”
Here is the policy you can attach to the AWS default Backup role “AWSBackupDefaultServiceRole” to work around this issue:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::<Account-ID>:role/*"
    }
  ]
}
Source: https://medium.com/contino-engineering/new-aws-backup-features-for-quick-and-easy-ec2-instance-recovery-c8887365ca6a
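A sketch of attaching that policy from the CLI; the inline policy name and the file name are my own placeholders, and <Account-ID> is the same placeholder used above:
# Add the PassRole permission as an inline policy on the default backup role.
cat > backup-passrole.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::<Account-ID>:role/*"
    }
  ]
}
EOF
aws iam put-role-policy \
  --role-name AWSBackupDefaultServiceRole \
  --policy-name backup-restore-passrole \
  --policy-document file://backup-passrole.json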

aws s3 upload fails only in production environment, but succeeds in local environment

I tried to upload an image using aws-sdk and multer-s3.
In my local environment the image upload succeeds, but in the production environment (AWS Lambda) it fails with error status 403 Forbidden.
My AWS credential key and secret key are the same as in the local environment, and I verified the AWS key in the production environment successfully.
I can't see any difference between the two environments. What am I missing?
I have even tried setting the AWS keys in my router code like below, but it also failed.
AWS.config.accessKeyId = 'blabla';
AWS.config.secretAccessKey = 'blalbla';
AWS.config.region = 'ap-northeast-2';
and here is my policy
{
  "Id": "Policy1536755128154",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1536755126539",
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::elebooks-image/*",
      "Principal": "*"
    }
  ]
}
Update the S3 policy attached to your user according to the policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::YOUR-BUCKET",
        "arn:aws:s3:::YOUR-BUCKET/*"
      ]
    }
  ]
}
It's working on my server.
I haven't worked with AWS Lambda but I am familiar with S3. When you're using the AWS SDK in your local environment, you're probably using the root user with default full access, so it will just work.
With Lambda however, according to the following extract from the documentation, you need to make sure that the IAM role you specified when you created the Lambda function has the appropriate permissions to do an s3:putObject to that bucket.
Permissions for your Lambda function – Regardless of what invokes a Lambda function, AWS Lambda executes the function by assuming the IAM role (execution role) that you specify at the time you create the Lambda function. Using the permissions policy associated with this role, you grant your Lambda function the permissions that it needs. For example, if your Lambda function needs to read an object, you grant permissions for the relevant Amazon S3 actions in the permissions policy. For more information, see Manage Permissions: Using an IAM Role (Execution Role).
See Writing IAM policies: How to grant access to an S3 bucket
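A minimal sketch of granting the execution role those object permissions, assuming the bucket name elebooks-image from the question and a role name of my-upload-lambda-role (a placeholder; use the role shown on your Lambda function's configuration page):
# Give the Lambda execution role the object-level S3 actions the upload needs.
cat > lambda-s3-objects.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::elebooks-image/*"
    }
  ]
}
EOF
aws iam put-role-policy \
  --role-name my-upload-lambda-role \
  --policy-name s3-image-upload \
  --policy-document file://lambda-s3-objects.json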

How to lockdown S3 bucket to specific users and IAM role(s)

In our environment, all IAM user accounts are assigned a customer-managed policy that grants read-only access to a lot of AWS services. Here's what I want to do:
Migrate a SQL Server 2012 Express database from on-prem to an RDS instance
Limit access to the S3 bucket containing the database files
Here are the requirements according to AWS:
An S3 bucket to store the .bak database file
A role with access to the bucket
SQLSERVER_BACKUP_RESTORE option attached to RDS instance
So far, I've done the following:
Created a bucket under the name "test-bucket" (and uploaded the .bak file here)
Created a role under the name "rds-s3-role"
Created a policy under the name "rds-s3-policy" with these settings:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::test-bucket/"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectMetaData",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::test-bucket/*"
    }
  ]
}
Assigned the policy to the role
Gave the AssumeRole permissions to the RDS service to assume the role created above
Created a new option group in RDS with the SQLSERVER_BACKUP_RESTORE option and linked it to my RDS instance
With no restrictions on my S3 bucket, I can perform the restore just fine; however, I can't find a solid way of restricting access to the bucket without hindering the RDS service from doing the restore.
In terms of my attempts to restrict access to the S3 bucket, I found a few posts online recommending using an explicit Deny statement to deny access to all types of principals and grant access based on some conditional statements.
Here's the contents of my bucket policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1486769843194",
  "Statement": [
    {
      "Sid": "Stmt1486769841856",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::test-bucket",
        "arn:aws:s3:::test-bucket/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userid": [
            "<root_id>",
            "<user1_userid>",
            "<user2_userid>",
            "<user3_userid>",
            "<role_roleid>:*"
          ]
        }
      }
    }
  ]
}
I can confirm the bucket policy does restrict access to only the IAM users that I specified, but I am not sure how it treats IAM roles. I used the :* syntax above per a document I found on the aws forums where the author stated the ":*" is a catch-all for every principal that assumes the specified role.
The only thing I'm having a problem with is, with this bucket policy in place, when I attempt to do the database restore, I get an access denied error. Has anyone ever done something like this? I've been going at it all day and haven't been able to find a working solution.
The following, admittedly, is guesswork... but reading between the lines of the somewhat difficult to navigate IAM documentation and elsewhere, and taking into account the way I originally interpreted it (incorrectly), I suspect that you are using the role's name rather than the role's ID in the policy.
Role IDs look similar to AWSAccessKeyIds except that they begin with AROA....
For the given role, find RoleId in the output from this:
$ aws iam get-role --role-name ROLE-NAME
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
Use caution when creating a broad Deny policy. You can end up denying s3:PutBucketPolicy to yourself, which leaves you in a situation where your policy prevents you from changing the policy... in which case, your only recourse is presumably to persuade AWS support to remove the bucket policy. A safer configuration would be to use this to deny only the object-level permissions.
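Putting those two points together, a sketch of a safer bucket policy that uses the RoleId (the AROA... value, shown here as a placeholder) and denies only object-level actions; the user ID placeholders are the same ones used in the question:
# Deny object-level actions to everyone except the listed user IDs and the role.
# AROAEXAMPLEROLEID is a placeholder for the RoleId returned by get-role.
cat > restrict-bucket.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::test-bucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:userid": [
            "<root_id>",
            "<user1_userid>",
            "AROAEXAMPLEROLEID:*"
          ]
        }
      }
    }
  ]
}
EOF
aws s3api put-bucket-policy --bucket test-bucket --policy file://restrict-bucket.json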

Amazon S3 Bucket Policy: How to lock down access to only your EC2 Instances

I am looking to lock down an S3 bucket for security purposes - I'm storing deployment images in the bucket.
What I want to do is create a bucket policy that supports anonymous downloads over http only from EC2 instances in my account.
Is there a way to do this?
An example of a policy that I'm trying to use (it won't allow itself to be applied):
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[my bucket name]",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:ec2:us-east-1:[my account id]:instance/*"
        }
      }
    }
  ]
}
Just to clarify how this is normally done: you create an IAM policy, attach it to a new or existing role, and decorate the EC2 instance with the role. You can also provide access through bucket policies, but that is less precise.
Details below:
S3 buckets are default deny except for the owner. So you create your bucket and upload the data. You can verify with a browser that the files are not accessible by trying https://s3.amazonaws.com/MyBucketName/file.ext. It should come back with error code "Access Denied" in the XML. If you get an error code of "NoSuchBucket", you have the URL wrong.
Create an IAM policy based on arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess. It starts out with the "Action" set to essentially s3:Get* and s3:List*, and the "Resource" key set to a wildcard. You just modify the resource to be the ARN of your bucket. You have to list both the bucket and its contents, so it becomes: "Resource": ["arn:aws:s3:::MyBucketName", "arn:aws:s3:::MyBucketName/*"]
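A sketch of creating that scoped copy from the CLI, assuming the bucket name MyBucketName from this answer and a policy name of MyBucketReadOnly (my own placeholder):
# Create a customer-managed policy limited to the one bucket.
cat > my-bucket-readonly.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::MyBucketName",
        "arn:aws:s3:::MyBucketName/*"
      ]
    }
  ]
}
EOF
aws iam create-policy \
  --policy-name MyBucketReadOnly \
  --policy-document file://my-bucket-readonly.json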
Now that you have a policy, what you want to do is decorate your instances with an IAM Role that automatically grants it this policy, all without any authentication keys having to be in the instance. So go to Roles, create a new role, make it an Amazon EC2 role, find the policy you just created, and your Role is ready.
Finally you create your instance and add the IAM role you just created. If the machine already has its own role, you just have to merge the two roles into a new one for the machine. If the machine is already running, it won't get the new role until you restart.
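The same role and instance-profile steps as a CLI sketch; the role, profile, and instance ID names are placeholders, and <account-id> is your own account:
# Trust policy that lets EC2 assume the role.
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name MyBucketReadOnlyRole \
  --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name MyBucketReadOnlyRole \
  --policy-arn "arn:aws:iam::<account-id>:policy/MyBucketReadOnly"
aws iam create-instance-profile --instance-profile-name MyBucketReadOnlyProfile
aws iam add-role-to-instance-profile --instance-profile-name MyBucketReadOnlyProfile \
  --role-name MyBucketReadOnlyRole
# Attach to a running instance (or specify the profile at launch).
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=MyBucketReadOnlyProfile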
Now you should be good to go. The machine has the rights to access the s3 share. Now you can use the following command to copy files to your instance. Note you have to specify the region
aws s3 cp --region us-east-1 s3://MyBucketName/MyFileName.tgz /home/ubuntu
Please Note, the term "Security through obscurity" is only a thing in the movies. Either something is provably secure, or it is insecure.
I used something like the policy below; use aws:sourceVpc with the VPC ID, or aws:sourceVpce with the VPC endpoint ID, depending on how your instances reach S3 (only one of the two condition keys is needed):
{
  "Version": "2012-10-17",
  "Id": "Allow only My VPC",
  "Statement": [
    {
      "Sid": "Allow only My VPC",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::{BUCKET_NAME}",
        "arn:aws:s3:::{BUCKET_NAME}/*"
      ],
      "Condition": {
        "StringLike": {
          "aws:sourceVpc": "{VPC_ID}"
        }
      }
    }
  ]
}