I noticed that Lake Formation SDK calls (both boto3 via the AWS CLI, and Go via Terraform) return ALL in the list of permissions assigned to a resource.
For example:
"PrincipalResourcePermissions": [
{
"Principal": {
"DataLakePrincipalIdentifier": "arn:aws:iam::ACCOUNT:role/FooRole"
},
"Resource": {
"Table": {
"CatalogId": "ACCOUNT",
"DatabaseName": "lf_test",
"Name": "foo"
}
},
"Permissions": [
"ALL",
"DESCRIBE"
],
"PermissionsWithGrantOption": []
}
Yet, I cannot delete this "ALL" permission. Attempting to revoke it with either the AWS CLI or Terraform results in an error:
An error occurred (InvalidInputException) when calling the RevokePermissions operation: No permissions revoked. Grantee does not have:[ALL]
What's going on here, and how do I fix it other than special-casing to ignore "ALL"?
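For reference, the CLI revoke attempt looks roughly like this (a sketch only; identifiers are the placeholders from the listing above):

aws lakeformation revoke-permissions \
  --principal DataLakePrincipalIdentifier=arn:aws:iam::ACCOUNT:role/FooRole \
  --resource '{"Table": {"CatalogId": "ACCOUNT", "DatabaseName": "lf_test", "Name": "foo"}}' \
  --permissions "ALL"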
I am logged in as LeadDeveloperRole in the AWS console and created a secret in Secrets Manager. I want this secret to be accessible only to
LeadDeveloperRole and AdminRole, so I used the resource policy below on the secret. While saving this policy, it shows an error saying:
"This resource policy will not allow you to manage this secret in the future."
As per my understanding, Deny + NotPrincipal means that, apart from LeadDeveloperRole and AdminRole, no one will have access to it.
Am I missing something here?
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111111111:role/LeadDeveloperRole",
                    "arn:aws:iam::111111111:role/AdminRole"
                ]
            },
            "Action": [
                "secretsmanager:*"
            ],
            "Resource": "arn:aws:secretsmanager:region:111111111:secret:secretid-xxxx1i"
        },
        {
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": [
                    "arn:aws:iam::111111111:role/LeadDeveloperRole",
                    "arn:aws:iam::111111111:role/AdminRole"
                ]
            },
            "Action": [
                "secretsmanager:*"
            ],
            "Resource": "arn:aws:secretsmanager:region:111111111:secret:secretid-xxxx1i"
        }
    ]
}
UPDATE:
I updated the policy with an explicit Allow, which gives the same error.
Try adding the account principal to the list of NotPrincipal entries, e.g. "arn:aws:iam::111111111:root" (or just the account ID number); without it, requests can be blocked.
From the docs:
When you use NotPrincipal with Deny, you must also specify the account ARN of the not-denied principal.
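For example, a sketch of the Deny statement with the account principal added (account ID and ARNs taken from the question):

{
    "Effect": "Deny",
    "NotPrincipal": {
        "AWS": [
            "arn:aws:iam::111111111:role/LeadDeveloperRole",
            "arn:aws:iam::111111111:role/AdminRole",
            "arn:aws:iam::111111111:root"
        ]
    },
    "Action": [
        "secretsmanager:*"
    ],
    "Resource": "arn:aws:secretsmanager:region:111111111:secret:secretid-xxxx1i"
}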
AWS provides a way, through its IAM policies, to limit access from a particular user/role to a specific named resource.
For example, the following permission:
{
    "Sid": "ThirdStatement",
    "Effect": "Allow",
    "Action": [
        "s3:List*",
        "s3:Get*"
    ],
    "Resource": [
        "arn:aws:s3:::confidential-data",
        "arn:aws:s3:::confidential-data/*"
    ]
}
will allow all List* and Get* operations on the confidential-data bucket and its contents.
However, I could not find such an option when going through GCP's custom roles.
Now, I know that for GCS buckets (which is my use case) you can use ACLs to achieve (more or less?) the same result.
My question is: assuming I create a service account identified by someone@myaccount-googlecloud.com, and I want this account to have read/write permissions on gs://mybucket-on-google-cloud-storage, how should I format the ACL to do this?
(For the time being, it does not matter to me what other permissions are inherited from the organization/folder/project.)
From the documentation:
Grant the service account foo@developer.gserviceaccount.com WRITE access to the bucket example-bucket:
gsutil acl ch -u foo@developer.gserviceaccount.com:W gs://example-bucket
Grant the service account foo@developer.gserviceaccount.com READ access to the bucket example-bucket:
gsutil acl ch -u foo@developer.gserviceaccount.com:R gs://example-bucket
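Applied to the names in the question (a sketch; ACL roles are concentric, so WRITER on the bucket also includes READER):

gsutil acl ch -u someone@myaccount-googlecloud.com:W gs://mybucket-on-google-cloud-storage

If the service account also needs to read objects uploaded by others, the bucket's default object ACL can be adjusted as well:

gsutil defacl ch -u someone@myaccount-googlecloud.com:R gs://mybucket-on-google-cloud-storage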
The format for the bucket's IAM policy (bindings) is as below:
{
    "bindings": [
        {
            "role": "[IAM_ROLE]",
            "members": [
                "[MEMBER_NAME]"
            ]
        }
    ]
}
Please refer to the Google Cloud documentation, e.g.:
{
    "kind": "storage#policy",
    "resourceId": "projects/_/buckets/bucket_name",
    "version": 1,
    "bindings": [
        {
            "role": "roles/storage.legacyBucketWriter",
            "members": [
                "projectEditor:projectname",
                "projectOwner:projectname"
            ]
        },
        {
            "role": "roles/storage.legacyBucketReader",
            "members": [
                "projectViewer:projectname"
            ]
        }
    ],
    "etag": "CAE="
}
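As an alternative to editing the policy JSON by hand, the same kind of binding can be added from the command line; a sketch using the names from the question (roles/storage.objectAdmin is just one reasonable choice for object read/write):

gsutil iam ch serviceAccount:someone@myaccount-googlecloud.com:roles/storage.objectAdmin gs://mybucket-on-google-cloud-storage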
I am attempting to deploy a CloudFormation template that pulls in some parameters from SSM using the method described in this blog post: https://aws.amazon.com/blogs/mt/integrating-aws-cloudformation-with-aws-systems-manager-parameter-store/
The relevant excerpt from the Parameters section of the CF template is:
"ZoneName" : {
"Type" : "AWS::SSM::Parameter::Value<String>",
"Description" : "DNS Hostname Zone",
"Default" : "/Deimos/ZoneName"
},
"ZoneId" : {
"Type" : "AWS::SSM::Parameter::Value<String>",
"Description" : "DNS Hostname Zone",
"Default" : "/Deimos/ZoneId"
},
However, I'm getting the following error when I attempt to deploy it (via CodePipeline):
Action execution failed
AccessDenied. User doesn't have permission to call ssm:GetParameters (Service: AmazonCloudFormation; Status Code: 400; Error Code: ValidationError; Request ID: d6756fbe-fd41-4ac5-93bd-56e5b397445e)
I've got a Role and Policy set up for CloudFormation that includes the following section to grant access to some parameter namespaces within SSM:
{
    "Sid": "XonoticCFFetchParameters",
    "Effect": "Allow",
    "Action": [
        "ssm:GetParameters",
        "ssm:GetParameter"
    ],
    "Resource": [
        "arn:aws:ssm:*:<aws account #>:parameter/Deimos/*",
        "arn:aws:ssm:*:<aws account #>:parameter/Installers/*",
        "arn:aws:ssm:*:<aws account #>:parameter/Xonotic/*"
    ]
},
These seem to have been applied just fine, based on the output of:
aws iam simulate-principal-policy --policy-source-arn "arn:aws:iam::<aws account #>:role/Xonotic-CloudFormationDeploy" --action-names "ssm:getParameters" --resource-arns "arn:aws:ssm:*:<aws account #>:parameter/Deimos/ZoneName"
{
    "EvaluationResults": [
        {
            "EvalActionName": "ssm:getParameters",
            "EvalResourceName": "arn:aws:ssm:*:<aws account #>:parameter/Deimos/ZoneName",
            "EvalDecision": "allowed",
            "MatchedStatements": [
                {
                    "SourcePolicyId": "Xonotic-Deployment",
                    "StartPosition": {
                        "Line": 3,
                        "Column": 19
                    },
                    "EndPosition": {
                        "Line": 16,
                        "Column": 10
                    }
                }
            ],
            "MissingContextValues": []
        }
    ]
}
So, the Role I'm using should have the access needed to fetch the parameter in question, but it's not working and I'm out of things to check.
OK, so in this case it turns out there was a JSON parameters file in the build pipeline that was overriding one of my parameters with an invalid value (it was putting the actual zone name in ZoneName instead of the SSM parameter name).
Fixed that, and parameters are now being passed to my build process just fine.
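For anyone hitting the same thing, the override looked conceptually like the first snippet below (the value shown is hypothetical); with a parameter of type AWS::SSM::Parameter::Value<String>, the template configuration must pass the SSM parameter name, which CloudFormation then resolves:

{
    "Parameters": {
        "ZoneName": "deimos.example.com"
    }
}

versus the working form:

{
    "Parameters": {
        "ZoneName": "/Deimos/ZoneName"
    }
}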
Every time I run the command
aws rekognition detect-labels --image "S3Object={Bucket=BucketName,Name=picture.jpg}" --region us-east-1
I get this error.
InvalidS3ObjectException: An error occurred (InvalidS3ObjectException) when calling the DetectLabels operation: Unable to get image metadata from S3. Check object key, region and/or access permissions.
I am trying to retrieve labels for a project I am working on but I can't seem to get past this step. I configured aws with my access key, secret key, us-east-1 region, and json as my output format.
I have also tried the code below and I receive the exact same error (I correctly replaced BucketName with the name of my bucket).
import boto3

BUCKET = "BucketName"
KEY = "picture.jpg"

def detect_labels(bucket, key, max_labels=10, min_confidence=90, region="eu-west-1"):
    # Ask Rekognition to label an object already stored in S3
    rekognition = boto3.client("rekognition", region)
    response = rekognition.detect_labels(
        Image={
            "S3Object": {
                "Bucket": bucket,
                "Name": key,
            }
        },
        MaxLabels=max_labels,
        MinConfidence=min_confidence,
    )
    return response['Labels']

for label in detect_labels(BUCKET, KEY):
    print("{Name} - {Confidence}%".format(**label))
I am able to see on my user account that it is calling Rekognition.
(Screenshot: IAM console showing the Rekognition calls being made.)
It seems like the issue is somewhere with my S3 bucket but I haven't found out what.
The S3 bucket and Rekognition need to be in the same region; a region mismatch is a common cause of this error.
More info: https://forums.aws.amazon.com/thread.jspa?threadID=243999
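In the snippet above, the default region in detect_labels is "eu-west-1" while the CLI call used us-east-1, so one quick check is to pass the bucket's region explicitly (assuming the bucket really is in us-east-1):

for label in detect_labels(BUCKET, KEY, region="us-east-1"):
    print("{Name} - {Confidence}%".format(**label))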
Kindly check your IAM role policies/permissions, and also check the same for the role created for the Lambda function. It's better to verify the policy using the IAM policy checker.
I am facing a similar issue; it might be due to the permissions and policies attached to the IAM roles and to the S3 bucket. Check the metadata for the objects in the S3 bucket as well.
My S3 bucket Policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1547200240036",
    "Statement": [
        {
            "Sid": "Stmt1547200205482",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::459983601504:user/veral"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::esp32-rekognition-459983601504/*"
        }
    ]
}
Cross-origin resource sharing (CORS):
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "POST",
            "GET",
            "DELETE"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
If you use server-side encryption on the bucket via KMS, remember to also give the IAM role access to decrypt with the KMS key.
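For example, a sketch of the extra statement that might be attached to the role (the key ARN here is a placeholder for the KMS key that encrypts the bucket):

{
    "Effect": "Allow",
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": "arn:aws:kms:REGION:ACCOUNT_ID:key/KEY_ID"
}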
In Account A, I created an S3 bucket with CloudFormation, and CodeBuild builds an artifact and uploads it to this bucket. In Account B, I try to create a stack with CloudFormation and use the artifact from Account A's bucket to deploy my Lambda function. But I get an Access Denied error. Does anyone know the solution? Thanks...
"TestBucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"Properties": {
"AccessControl": "BucketOwnerFullControl"
}
},
"IAMPolicy": {
"Type": "AWS::S3::BucketPolicy",
"Properties": {
"Bucket": {
"Ref": "TestBucket"
},
"PolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::xxxxxxxxxxxx:root",
"arn:aws:iam::xxxxxxxxxxxx:root"
]
},
"Action": [
"s3:GetObject"
],
"Resource": [
{
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "TestBucket"
},
"/*"
]
]
},
{
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "TestBucket"
}
]
]
}
]
}
]
}
}
}
Assuming that the xxxxxxxxxxxx in the statement below is the account number of Account B:
"AWS": [
"arn:aws:iam::xxxxxxxxxxxx:root",
"arn:aws:iam::xxxxxxxxxxxx:root"
]
You are saying that this bucket grants access to Account B on the basis of the IAM permissions/policies held by identities in Account B's IAM service.
So essentially any user/instance profile/role in Account B whose policy grants explicit S3 access will be able to access this bucket in Account A. This suggests that the IAM policy you are attaching to the Lambda role in Account B doesn't have explicit S3 access.
I would suggest giving S3 access to your Lambda function, and this should work.
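For example, a sketch of a statement on the Lambda/deployment role in Account B (the bucket name is a placeholder for the Account A artifact bucket):

{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject"
    ],
    "Resource": "arn:aws:s3:::account-a-artifact-bucket/*"
}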
Please be aware that in the future, if you want to write to the S3 bucket in Account A from Account B, you will have to make sure you put the bucket-owner-full-control ACL on the objects so that they are accessible across both accounts.
Example:
Using CLI:
$ aws s3api put-object --acl bucket-owner-full-control --bucket my-test-bucket --key dir/my_object.txt --body /path/to/my_object.txt
Instead of "arn:aws:iam::xxxxxxxxxxxx:root", try granting access to all identities in the account by specifying just the account ID as the item within the Principal/AWS object: "xxxxxxxxxxxx".
See Using a Resource-based Policy to Delegate Access to an Amazon S3 Bucket in Another Account for more details.
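A sketch of how that Principal block could look (account ID masked as in the question):

"Principal": {
    "AWS": [
        "xxxxxxxxxxxx"
    ]
}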