Permissions for AWS Step Function tutorial

I am new to AWS Step Functions and was trying to follow this tutorial https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-human-approval.html#human-approval-yaml but I am getting permission errors on Step #2 when importing the YAML template in the CloudFormation service.
"The following resource types are not supported for resource import: AWS::ApiGateway::Account,AWS::Lambda::Permission,AWS::StepFunctions::StateMachine"
Our AWS admin granted me the following permissions:
IAM Full Access
SNS Full Access
STS_AssumeRole *
Lambda_FullAccess
AWS StepFunction_FullAccess
APIGatewayAdministrator (equivalent to full access)
He also said that the following resource types are used:
ApiGateway::RestApi
ApiGateway::Resource
ApiGateway::Method
ApiGateway::Account
ApiGateway::Stage
ApiGateway::Deployment
IAM::Role
Partition
Lambda::Function
Lambda::Permission
StepFunctions::StateMachine
APIGatewayEndpoint
SNS::Topic
But I am still unable to import the YAML template from the tutorial.
What's missing?
Thank you

After logging back in and attempting again, it worked.
The error indicates that the template was initially deployed using the "Import resources into stack" option, whereas these are new resources that should be created with the standard "Create stack with new resources" flow.
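For reference, a minimal sketch of creating the stack from the CLI instead of the console. The local file name human-approval.yaml and the stack name are placeholders; --capabilities CAPABILITY_IAM is needed because the template creates an IAM::Role.
# Create new resources (not an import) from the tutorial template
aws cloudformation create-stack \
  --stack-name human-approval-tutorial \
  --template-body file://human-approval.yaml \
  --capabilities CAPABILITY_IAM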

Related

Unable to deploy Google Cloud Functions

I tried to deploy a cloud function on Google Cloud Platform using the console. The command I used was:
gcloud functions deploy function_name --runtime=python37 --memory=1024MB --region=asia-northeast1 --allow-unauthenticated --trigger-http
But I am getting this error:
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed: could not resolve storage source: googleapi: Error 404: Not Found, notFound
I tried googling around, but it seems no one has faced this error message before. I have also tried switching to another project, and deployment works fine there.
gcloud config set project another_project
I'd appreciate it if anyone has any idea what is causing this error and how I can solve it. Thanks!
As per the documentation here:
https://cloud.google.com/functions/docs/building
it says that, because Cloud Storage is used directly in your project, the source code directory for your functions is visible in a bucket named:
gcf-sources-<PROJECT_NUMBER>-<REGION>
Therefore, if you delete this bucket in Cloud Storage, you need to recreate it.
For example, if your project number is 123456789 and it runs in asia-south1, then the bucket name should be:
gcf-sources-123456789-asia-south1
Once you recreate the bucket, you can use gcloud or the Firebase CLI to deploy, and it should work normally.
Hope this helps. It worked for me!
Enjoy!
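A quick sketch of that check-and-recreate step with gsutil, assuming the example project number 123456789 and region asia-south1 from above (my-project-id is also a placeholder):
# List the project's buckets and look for the staging bucket
gsutil ls -p my-project-id | grep gcf-sources
# If it is missing, recreate it in the same region (the name must match exactly)
gsutil mb -p my-project-id -l asia-south1 gs://gcf-sources-123456789-asia-south1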
Please check if a bucket named gcf-sources-**** exists.
If not, you will need to contact Google Cloud support to request that particular bucket be rebuilt.
Update:
https://issuetracker.google.com/175866925
Update from GCP Support: that does not resolve my problem at all.
First they said they needed to recreate that bucket; later they said that does not resolve the problem and they are still investigating it.
Just for testing, I created that bucket myself, as Oru said.
Still the same error. I will update this thread when I get new information.

GCP Cloud Build fails with permissions error even though correct role is granted

I set up a Cloud Build trigger in my GCP project in order to deploy a Cloud Function from a Cloud Source Repository via a .yaml file. Everything seems to have been set up correctly and permissions granted according to the official documentation, but when I test the trigger by running it manually, I get the following error:
ERROR: (gcloud.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[Missing necessary permission iam.serviceAccounts.actAs for on resource [MY_SERVICE_ACCOUNT]. Please grant the roles/iam.serviceAccountUser role. You can do that by running 'gcloud iam service-accounts add-iam-policy-binding [MY_SERVICE_ACCOUNT] --member= --role=roles/iam.serviceAccountUser']
Now first of all, running the suggested command doesn't even work, because the suggested syntax is bad (missing a value for "member="). But more importantly, I had already added that role to the service account the error message is complaining about. I tried removing it and adding it back, both from the UI and the CLI, and this error still shows every time.
Why?
I figured it out after a lot of trial and error. The documentation seems to be incorrect (missing some additional necessary permissions). I used this answer to get me there.
In short, you also need to add the cloudfunctions.developer and iam.serviceAccountUser roles to the [PROJECT_NUMBER]@cloudbuild.gserviceaccount.com account, and (I believe) the aforementioned Cloud Build service account also needs to be added as a member of the service account that has permissions to deploy your Cloud Function (again shown in the linked SO answer).
The documentation really should be reflecting this.
Good luck!
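A hedged sketch of the two grants described above. PROJECT_ID, PROJECT_NUMBER, and the runtime service account are placeholders; I am assuming the function deploys as the default App Engine service account.
PROJECT_ID=my-project                 # placeholder
PROJECT_NUMBER=123456789012           # placeholder
CLOUD_BUILD_SA="${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com"
RUNTIME_SA="${PROJECT_ID}@appspot.gserviceaccount.com"   # assumed runtime service account

# Let the Cloud Build service account deploy Cloud Functions
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${CLOUD_BUILD_SA}" \
  --role="roles/cloudfunctions.developer"

# Let the Cloud Build service account act as the runtime service account
gcloud iam service-accounts add-iam-policy-binding "$RUNTIME_SA" \
  --member="serviceAccount:${CLOUD_BUILD_SA}" \
  --role="roles/iam.serviceAccountUser"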

google storage transfer service account does not exist in new project

I am trying to create resources using Terraform in a new GCP project. As part of that I want to grant roles/storage.legacyBucketWriter on a specific bucket to the Google-managed service account which runs Storage Transfer Service jobs (the pattern is project-[project-number]@storage-transfer-service.iam.gserviceaccount.com). I am using the following config:
resource "google_storage_bucket_iam_binding" "publisher_bucket_binding" {
bucket = "${google_storage_bucket.bucket.name}"
members = ["serviceAccount:project-${var.project_number}#storage-transfer-service.iam.gserviceaccount.com"]
role = "roles/storage.legacyBucketWriter"
}
To clarify, I want to do this so that when I create one-off transfer jobs using the JSON API, they don't fail the prerequisite checks.
When I run Terraform apply, I get the following:
Error applying IAM policy for Storage Bucket "bucket":
Error setting IAM policy for Storage Bucket "bucket": googleapi:
Error 400: Invalid argument, invalid
I think this is because the service account in question does not exist yet, as I cannot do this via the console either.
Is there any other service that I need to enable for the service account to be created?
It seems I am able to create/find the service account once I call this API:
https://cloud.google.com/storage/transfer/reference/rest/v1/googleServiceAccounts/get
for my project to get the email address.
Not sure if this is the best way, but it works.
Soroosh's reply is accurate: querying the API as per this doc (https://cloud.google.com/storage-transfer/docs/reference/rest/v1/googleServiceAccounts/) will enable the service account and Terraform will run. But then you have to make that API call from Terraform for it to work, and ain't nobody got time for that.
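For completeness, a minimal sketch of that one-time googleServiceAccounts.get call from the shell (PROJECT_ID is a placeholder); making this request is what provisions the per-project service account that the Terraform binding above refers to.
PROJECT_ID=my-project   # placeholder
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storagetransfer.googleapis.com/v1/googleServiceAccounts/${PROJECT_ID}"
# The response includes accountEmail, i.e. project-<number>@storage-transfer-service.iam.gserviceaccount.com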

How to create an AWS IAM policy/role or whatever to allow import of an ova file

I'm trying to import a VM created on VirtualBox. It was exported as an OVA file. I have gone to a number of forums to try to find out how to get around this error.
An error occurred (InvalidParameter) when calling the ImportImage operation: The service role <vmimport> does not exist or
does not have sufficient permissions for the service to continue
I have used the console instead of the AWS CLI to perform the steps described here: http://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
However I am still getting the error. Can anyone guide me on how to troubleshoot?
The command to import looks like this:
aws ec2 import-image --description "Bitnami WordPress CiviCRM" --license-type BYOL --disk-containers file://containers.json
My .aws/credentials and config files are set up correctly, as I can perform other CLI operations, but I am stumped on how to associate the vmimport role with the IAM user.
Never be afraid to start over, which is what I did. I found another "cookbook" to do this and followed the instructions. I was able to import the image, but it did not convert. The message was "ClientError: Unknown OS / Missing OS files".
I'll submit another question on this if I can't figure it out.
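For anyone hitting the original vmimport error, a minimal sketch of creating the service role with the CLI, following the linked VM Import/Export guide (the local file name is a placeholder; the role name must be exactly vmimport):
# trust-policy.json: allow the VM Import/Export service to assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:Externalid": "vmimport" } }
    }
  ]
}
EOF

aws iam create-role --role-name vmimport \
  --assume-role-policy-document file://trust-policy.json
# A role policy granting s3:GetObject/s3:ListBucket on the bucket holding the OVA,
# plus the EC2 snapshot/register actions, must also be attached with
# aws iam put-role-policy (as described in the linked guide) before retrying import-image.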

Why do I get access denied when trying to list the content of a bucket in a different project than my dataflow job?

I have two different projects, A and B. In A I have a Google Cloud Function running that triggers on messages on a Pub/Sub topic and creates a Dataflow job. This Dataflow job lists and reads the items from a specific bucket in B, and this is where my problem starts.
I have followed the instructions here: https://cloud.google.com/dataflow/security-and-permissions#accessing-cloud-storage-buckets-across-cloud-platform-projects regarding ACL and I can see that my project user has been added as OWNER to the bucket I try to read from.
The error message I get is:
403 Forbidden\n{\n \"code\" : 403,\n \"errors\" : [ {\n \"domain\" : \"global\",\n \"message\" : \"Caller does not have storage.objects.list access to bucket bucketName.\",\n \"reason\" : \"forbidden\"\n } ],\n \"message\" : \"Caller does not have storage.objects.list access to bucket bucketName.\"\n}
Why doesn't the function have list access when project A has OWNER rights on the bucket in B? Does the cloud function run with a different set of credentials than those used in the linked tutorial?
If I trigger it manually from the CLI it works as expected, but then it probably uses my own credentials, I guess.
There are two things here. Are you listing the files from the cloud function and then launching the Dataflow job on the files? If yes, please check that the user/service account under which your cloud function runs has the correct permissions on the bucket. If not, please make sure both service accounts of project A (cloudservices and Compute Engine, mentioned at https://cloud.google.com/dataflow/security-and-permissions#accessing-cloud-storage-buckets-across-cloud-platform-projects) have the OWNER permissions on the bucket in project B.
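As an illustration of the second case, a sketch that uses bucket-level IAM instead of the legacy ACLs from the linked page (project number and bucket name are placeholders; roles/storage.objectViewer covers the storage.objects.list permission named in the error):
PROJECT_A_NUMBER=111111111111    # placeholder: project A's number
BUCKET=gs://bucket-in-project-b  # placeholder: the bucket in project B

# Grant project A's cloudservices and Compute Engine default service accounts
# read/list access on the bucket in project B
gsutil iam ch "serviceAccount:${PROJECT_A_NUMBER}@cloudservices.gserviceaccount.com:roles/storage.objectViewer" "$BUCKET"
gsutil iam ch "serviceAccount:${PROJECT_A_NUMBER}-compute@developer.gserviceaccount.com:roles/storage.objectViewer" "$BUCKET"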