I am getting a 403 PERMISSION_DENIED response from GCP when running Deployment Manager to create a deployment that creates a service account and sets the IAM policy for it via the Cloud Resource Manager API. Here is the setIamPolicy template for this:
{
    'resources': [
        {
            'name': context.env['name'],
            'action': 'gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.setIamPolicy',
            'properties': {
                'resource': context.properties['resource'],
                'policy': {
                    'bindings': context.properties['bindings']
                }
            }
        }
    ]
}
Response from GCP:
{
  "ResourceType": "gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.setIamPolicy",
  "ResourceErrorCode": "403",
  "ResourceErrorMessage": {
    "code": 403,
    "message": "The caller does not have permission",
    "status": "PERMISSION_DENIED",
    "statusMessage": "Forbidden",
    "requestPath": "https://cloudresourcemanager.googleapis.com/v1/projects/project-name#project-name.iam.gserviceaccount.com:setIamPolicy",
    "httpMethod": "POST"
  }
}
FYI: the Deployment Manager robot account (12345@cloudservices.gserviceaccount.com) has been given Project Owner permissions in IAM.
The right way to do this is to read the project's current IAM policy first and patch it with the new bindings, rather than overwriting the policy outright:
{
    # Set the IAM policy by patching the existing policy with the
    # config contents.
    'name': policy_add_name,
    'action': 'gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.setIamPolicy',
    'properties': {
        'resource': project_id,
        'policy': '$(ref.' + policy_get_name + ')',
        'gcpIamPolicyPatch': {
            'add': policies_to_add,
        }
    }
}
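The $(ref.policy_get_name) reference above points to a companion action that fetches the project's current IAM policy, so that gcpIamPolicyPatch is applied on top of it rather than replacing it. A minimal sketch of that resource, reusing the policy_get_name and project_id names from the snippet above:
{
    # Fetch the project's current IAM policy so the set action can patch it.
    'name': policy_get_name,
    'action': 'gcp-types/cloudresourcemanager-v1:cloudresourcemanager.projects.getIamPolicy',
    'properties': {
        'resource': project_id
    }
}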
When I create a record in my hosted zone via the AWS Web Console, I can select the Routing Policy as "Simple".
When I try to create the same record programmatically via boto3, I seem to have no option to set a Routing Policy, and it is "Latency" by default.
What am I missing?
r53.change_resource_record_sets(
    HostedZoneId=hz_id,
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': root_domain,
                'Type': 'A',
                'Region': region,
                'AliasTarget': {
                    'DNSName': f's3-website.{region}.amazonaws.com',
                    'EvaluateTargetHealth': False,
                    'HostedZoneId': s3_hz_id,
                },
                'SetIdentifier': str(uuid.uuid4())
            }
        }]
    }
)
Removing Region and SetIdentifier works here. Route 53 infers the routing policy from the fields present on the record set: supplying Region together with SetIdentifier makes it a latency-based record, while omitting both gives you a simple-routing record.
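A minimal sketch of the corrected call, keeping the hz_id, root_domain, region and s3_hz_id variables from the question:
# Simple-routing alias record: no Region and no SetIdentifier.
r53.change_resource_record_sets(
    HostedZoneId=hz_id,
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': root_domain,
                'Type': 'A',
                'AliasTarget': {
                    'DNSName': f's3-website.{region}.amazonaws.com',
                    'EvaluateTargetHealth': False,
                    'HostedZoneId': s3_hz_id,
                },
            }
        }]
    }
)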
I'm trying to run a Docker image on Cloud Run with the Terraform code below:
provider "google" {
  credentials = file("myCredentials.json")
  project     = "myproject-214771"
  region      = "asia-northeast1"
}

resource "google_cloud_run_service" "default" {
  name     = "hello-world"
  location = "asia-northeast1"

  template {
    spec {
      containers {
        image = "gcr.io/myproject-214771/hello-world:latest"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }
}
The Docker image deployed successfully. But when I access the service URL, it shows this:
Error: Forbidden
Your client does not have permission to get URL / from this server.
Are there any mistakes in my Terraform code?
Add the code below to your Terraform configuration to allow unauthenticated invocations (for a public API or website):
data "google_iam_policy" "noauth" {
  binding {
    role = "roles/run.invoker"
    members = [
      "allUsers",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "noauth" {
  location    = google_cloud_run_service.default.location
  project     = google_cloud_run_service.default.project
  service     = google_cloud_run_service.default.name
  policy_data = data.google_iam_policy.noauth.policy_data
}
So this is the full code:
provider "google" {
  credentials = file("myCredentials.json")
  project     = "myproject-214771"
  region      = "asia-northeast1"
}

resource "google_cloud_run_service" "default" {
  name     = "hello-world"
  location = "asia-northeast1"

  template {
    spec {
      containers {
        image = "gcr.io/myproject-214771/hello-world:latest"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }
}

data "google_iam_policy" "noauth" {
  binding {
    role = "roles/run.invoker"
    members = [
      "allUsers",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "noauth" {
  location    = google_cloud_run_service.default.location
  project     = google_cloud_run_service.default.project
  service     = google_cloud_run_service.default.name
  policy_data = data.google_iam_policy.noauth.policy_data
}
Finally, your URL serves your website properly.
In the Cloud Run console, "Authentication" now shows "Allow unauthenticated".
Don't forget to grant the "Cloud Run Admin" role to the service account that Terraform uses.
Otherwise, you cannot allow unauthenticated invocations, and you will get the error below:
Error setting IAM policy for cloudrun service
"v1/projects/myproject-214771/locations/asia-northeast1/services/hello-world":
googleapi: Error 403: Permission 'run.services.setIamPolicy' denied on
resource
'projects/myproject-214771/locations/asia-northeast1/services/hello-world'
(or resource may not exist).
Moreover, lesser roles are not sufficient to allow unauthenticated invocations for a public API or website; only the "Cloud Run Admin" role grants the permission needed.
Most likely you need to grant the service account "Cloud Run Admin" access; it needs the run.services.setIamPolicy permission to change the IAM settings on the new Cloud Run service.
Compute environments created via boto3 are not displayed in the AWS console. I can see them in the batch_client.describe_compute_environments() call response:
{
    'computeEnvironmentName': 'name',
    'computeEnvironmentArn': 'arn:aws:batch:us-east-1:<ID>:compute-environment/ml-retraining-compute-env-second',
    'ecsClusterArn': 'arn:aws:ecs:us-east-1:<ID>:cluster/ml-retraining-compute-env-second_Batch_b18fcd09-8d7e-351b-bc0f-13ffa83a6b15',
    'type': 'MANAGED',
    'state': 'ENABLED',
    'status': 'INVALID',
    'statusReason': "CLIENT_ERROR - The security group 'sg-2436d85c' does not exist",
    'computeResources': {
        'type': 'EC2',
        'minvCpus': 0,
        'maxvCpus': 512,
        'desiredvCpus': 24,
        'instanceTypes': [
            'optimal'
        ],
        'subnets': [
            'subnet-fa22de86'
        ],
        'securityGroupIds': [
            'sg-2436d85c'
        ],
        'instanceRole': 'arn:aws:iam::<ID>:instance-profile/ecsInstanceRole',
        'tags': {
            'component': 'ukai-training-pipeline',
            'product': 'Cormorant',
            'jira_project_team': 'CORPRJ',
            'business_unit': 'Threat Systems Products',
            'created_by': 'ml-pipeline'
        }
    },
    'serviceRole': 'arn:aws:iam::<ID>:role/AWSBatchServiceRole'
}
but the Compute Environments table on the Batch page in AWS console UI does not show anything. The table is empty. When I try to create compute environment with the same name again via boto3 call, I get this response:
ERROR - Error setting compute environment: An error occurred
(ClientException) when calling the CreateComputeEnvironment operation: Object already exists.
Based on the comments, the issue was that the console was set to a different region from the one the boto3 client was using (the ARNs show us-east-1). The solution was to switch the console to that region, after which the compute environments appeared.
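To avoid this kind of mismatch, it can help to pin the region explicitly when creating the client and compare it with the region selected in the console. A minimal sketch, assuming us-east-1 as in the ARNs above:
import boto3

# Create the Batch client against an explicit region; the console must be
# switched to this same region for the environments to show up.
batch_client = boto3.client('batch', region_name='us-east-1')

for env in batch_client.describe_compute_environments()['computeEnvironments']:
    print(env['computeEnvironmentName'], env['status'])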
I have an AWS ElasticSearch domain in eu-west-1 region and have taken a snapshot to an S3 bucket sub folder also in the same region.
I have also deployed a second AWS ElasticSearch domain in another aws region - eu-west-2.
I added S3 bucket replication between the buckets, but when I try to register the repository on the eu-west-2 AWS ES domain, I get the following error:
500
{"error":{"root_cause":[{"type":"blob_store_exception","reason":"Failed to check if blob [master.dat] exists"}],"type":"blob_store_exception","reason":"Failed to check if blob [master.dat] exists","caused_by":{"type":"amazon_s3_exception","reason":"Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 14F0571DFF522922; S3 Extended Request ID: U1OnlKPOkfCNFzoV9HC5WBHJ+kfhAZDMOG0j0DzY5+jwaRFJvHkyzBacilA4FdIqDHDYWPCrywU=)"}},"status":500}
This is the code I am using to register the repository on the new cluster (taken from https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-snapshots.html#es-managedomains-snapshot-restore):
import boto3
import requests
from requests_aws4auth import AWS4Auth

host = 'https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/'  # include https:// and trailing /
region = 'eu-west-2'  # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

# Register repository
path = '_snapshot/es-elk-prod'  # the Elasticsearch API endpoint
url = host + path

payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "region": "eu-west-2",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}

headers = {"Content-Type": "application/json"}

r = requests.put(url, auth=awsauth, json=payload, headers=headers)
print(r.status_code)
print(r.text)
From the logs, I get:
curl -X GET 'https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/_snapshot/es-mw-elk-prod/_all?pretty'
{
"error" : {
"root_cause" : [
{
"type" : "amazon_s3_exception",
"reason" : "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 72A59132E2830D81; S3 Extended Request ID: o0XalToNp19HDJKSOVxmna71hx3LkwoSFEobm3HQGH1HEzxOrAtYHg+asnKxJ03iGSDDhUz5GUI=)"
}
],
"type" : "amazon_s3_exception",
"reason" : "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 72A59132E2830D81; S3 Extended Request ID: o0XalToNp19HDJKSOVxmna71hx3LkwoSFEobm3HQGH1HEzxOrAtYHg+asnKxJ03iGSDDhUz5GUI=)"
},
"status" : 500
}
The role ARN is able to access the S3 bucket; it is the same ARN I use to snapshot the eu-west-2 domain to S3. Since the eu-west-1 snapshot is stored in a sub-folder of the S3 bucket, I added a path to the payload, such that:
payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "path": "es-elk-prod",
        "region": "eu-west-2",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}
but this didn't work either.
What is the correct way to restore snapshot created in one aws region into another aws region?
Any advice is much appreciated.
I've had similar, though not identical, error messages along the lines of "The bucket is in this region: eu-west-1. Please use this region to retry the request" when moving from eu-west-1 to us-west-2.
According to Amazon's documentation (under "Migrating data to a different domain"), you need to specify an endpoint rather than a region:
If you encounter this error, try replacing "region": "us-east-2" with "endpoint": "s3.amazonaws.com" in the PUT statement and retry the request.
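Applied to the registration code from the question, the repository settings would then look something like this (a sketch only; the bucket and role values are the question's placeholders):
# Register the snapshot repository using "endpoint" instead of "region",
# as suggested by the AWS documentation quoted above.
payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "endpoint": "s3.amazonaws.com",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}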
I would like to use the Serverless Application Model (SAM) and CloudFormation to create a simple Lambda function which gets triggered when an object is created in an S3 bucket (e.g. thescore-cloudfront-trial). How do I enable the trigger from the S3 bucket to the Lambda function? Below is my Python 3 boto3 code.
import json
import boto3

region = 'us-east-1'

test_lambda_template = {
    'AWSTemplateFormatVersion': '2010-09-09',
    'Transform': 'AWS::Serverless-2016-10-31',
    'Resources': {
        'CopyS3RajivCloudF': {
            'Type': 'AWS::Serverless::Function',
            'Properties': {
                "CodeUri": 's3://my-tmp/CopyS3Lambda',
                "Handler": 'lambda.handler',
                "Runtime": 'python3.6',
                "Timeout": 300,
                "Role": 'my_existing_role_arn'
            },
            'Events': {
                'Type': 'S3',
                'Properties': {
                    "Bucket": "thescore-cloudfront-trial",
                    "Events": 's3:ObjectCreated:*'
                }
            }
        },
        'SrcBucket': {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketName": 'thescore-cloudfront-trial',
            }
        }
    }
}

# config and aws below are helper modules from my own codebase (not shown).
conf = config.get_aws_config('development')
client = aws.client(conf, 'cloudformation', region_name=region)

response = client.create_change_set(
    StackName='RajivTestStack',
    TemplateBody=json.dumps(test_lambda_template),
    Capabilities=['CAPABILITY_IAM'],
    ChangeSetName='a',
    Description='Rajiv ChangeSet Description',
    ChangeSetType='CREATE'
)

response = client.execute_change_set(
    ChangeSetName='a',
    StackName='RajivTestStack',
)
I figured it out, with caveats:
Caveat 1: The trigger notification shows up in the S3 console but not in the Lambda console. My existing Python deploy scripts using the boto3 s3 and lambda clients (which I want to replace) show the notification in both consoles.
Caveat 2: For monitoring, I see my Lambda trigger only when I switch to the Lambda alias view. But I haven't specified an alias for my Lambda, so I don't know why it doesn't show in the non-alias view (which only shows the LATEST version).
I had to modify the Events key like this:
'Events': {
    'RajivCopyEvent': {
        'Type': 'S3',
        'Properties': {
            "Bucket": {"Ref": "SrcBucket"},
            "Events": "s3:ObjectCreated:*"
        }
    }
}
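For context, in the SAM spec the Events block lives under the function's Properties and is keyed by a logical event name, with the bucket referenced via Ref so CloudFormation can wire up the notification. A sketch of how the corrected resource definition might look, assuming the same CodeUri, handler and role as in the question:
'CopyS3RajivCloudF': {
    'Type': 'AWS::Serverless::Function',
    'Properties': {
        'CodeUri': 's3://my-tmp/CopyS3Lambda',
        'Handler': 'lambda.handler',
        'Runtime': 'python3.6',
        'Timeout': 300,
        'Role': 'my_existing_role_arn',
        # Events is a property of the function, keyed by a logical event name.
        'Events': {
            'RajivCopyEvent': {
                'Type': 'S3',
                'Properties': {
                    'Bucket': {'Ref': 'SrcBucket'},
                    'Events': 's3:ObjectCreated:*'
                }
            }
        }
    }
}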