Enabling AWS Config with CloudFormation does not work

I registered my delegated administrator account for my AWS Organization successfully (I get the notification that I'm the delegated admin every time I'm in the StackSets console).
So I should be able to enable AWS Config across the whole organization with the sample template provided by AWS. But every time I run the StackSet, I get the following error:
Cancelled since failure tolerance has exceeded
As I couldn't find any further log information provided by AWS, I'm really confused about what I'm missing.
[Screenshot: StackSet configuration]
Any suggestions?

In the Stack instances tab, look for the stack instances that failed and debug them separately.
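If you prefer the CLI, you can pull the per-account failure reasons directly; the stack set name, account ID, and region below are placeholders:

# List all stack instances and their statuses for the stack set
aws cloudformation list-stack-instances --stack-set-name my-config-stackset
# Inspect one failed instance to see its StatusReason
aws cloudformation describe-stack-instance \
    --stack-set-name my-config-stackset \
    --stack-instance-account 111122223333 \
    --stack-instance-region eu-west-1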

Related

GCP Cloud Composer Failing to Create Environment

I have been trying to create a new Composer environment for several days now and keep getting the error message below. I have confirmed that the service accounts used in this process have the necessary IAM roles. This looks more like an issue with Celery based on the logs, but I am not sure what else to do here. Any recommendations on how to work through this error?
[Screenshot: Error Message]
[Screenshot: Service Account Permissions]
[Screenshot: Additional Service Account Permissions]
[Screenshot: Log from Composer Setup]

Terraform Google Cloud service account issue

I'm trying to create a GKE cluster through Terraform and am facing an issue with service accounts. In our enterprise, the service accounts used by Terraform are created in a project named svc-accnts, which resides in a folder named prod.
I'm trying to create the GKE cluster in a different folder, Dev, in a project named apigw. Through Terraform, when I use a service account with the necessary permissions that resides in the project apigw, it works fine.
But when I try to use a service account with the same permissions that resides in a different folder, I get this error:
Error: googleapi: Error 403: Kubernetes Engine API has not been used in project 8075178406 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/container.googleapis.com/overview?project=8075178406 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
where 8075178406 is the project number of svc-accnts.
Why does it try to enable the API in svc-accnts when the GKE cluster is created in apigw? Are service accounts not meant to be used across folders?
Thanks.
The error you provided is not about the permissions of the service account. Maybe you did not change the project in the provider? Remember, you can have multiple providers of the same type (google) that point to different projects. A code example would provide more information.
See:
https://medium.com/scalereal/how-to-use-multiple-aws-providers-in-a-terraform-project-672da074c3eb (this is for AWS, but same idea)
https://www.terraform.io/language/providers/configuration
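If you are using a single default provider block, a quick sanity check is to pin the project and credentials the provider picks up from the environment before planning; the key path below is an assumption for illustration:

# The google provider falls back to these when project/credentials
# are not set explicitly in the provider block
export GOOGLE_PROJECT=apigw
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/terraform-sa-key.json
terraform plan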
Looks like this is a known issue and happens through the gcloud CLI as well.
https://issuetracker.google.com/180053712
The workaround is to enable the Kubernetes Engine API on the project (svc-accnts), and then it works fine. I was hesitant to do that because I thought it might create the resources in that project.
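For reference, the workaround is a one-liner; svc-accnts here stands in for the real project ID:

# Enable the Kubernetes Engine API on the project that hosts the service accounts
gcloud services enable container.googleapis.com --project=svc-accnts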

Private AWS credentials being shared with Serverless.com?

I've been having trouble with a deployment that uses a serverless component, so I've been trying to debug it. Stepping through the code, I thought I'd be able to step into the component itself and see what was going on.
But to my surprise, I couldn't actually debug it, because the component doesn't actually exist on my computer. Apparently the serverless CLI sends a request to a server, and the request seems to include everything Serverless needs to build and deploy the actual service, which includes my AWS credentials...
Is this a well-known thing? Is there a way to force Serverless to build and deploy locally? This really caught me by surprise, and to be honest I'm not very happy about it.
I haven't used their platform (I assumed the CLI only executed locally, so this seems very risky), but you can make it more secure in the following ways, sketched below.
First, set up an IAM role that can only perform the deploy actions for your app. Then create a profile that assumes this role whenever you work on your Serverless app and use the CLI.
Second, you can avoid long-lived CLI credentials (IAM users) by using AWS SSO, which generates CLI credentials that last an hour; you can log in from the CLI with the AWS CLI. This means your CLI credentials will live for at most one hour.
If the requests always come from the same IP, you can also put that in an IAM policy, but I wouldn't imagine there is any guarantee that their IP will always be the same.
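A rough sketch of both ideas; the profile name, account ID, and role name are made up for illustration:

# ~/.aws/config entry for a profile that assumes a deploy-only role:
#   [profile sls-deploy]
#   role_arn       = arn:aws:iam::123456789012:role/ServerlessDeployOnly
#   source_profile = default
# Or use AWS SSO so credentials are short-lived instead of permanent:
aws configure sso --profile sls-deploy    # one-time interactive setup
aws sso login --profile sls-deploy        # fetches temporary credentials
AWS_PROFILE=sls-deploy serverless deploy  # deploy under the restricted profile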

Error submitting Cloudbuild job from Cloudfunctions

Hopefully this is a simple one for someone with a little deeper knowledge than me...
I have a Cloud Function that responds to webhook calls by submitting jobs to Cloud Build via the API. This works fine, except that now we have some jobs that need to use KMS keys from a different project:
secrets:
- kmsKeyName: projects/xxx/locations/global/keyRings/xxx/cryptoKeys/xxx
With this included in cloudbuild.yaml, the API call to submit the Cloud Build job returns:
400 invalid build: failed to check access to "projects/xxx/locations/global/keyRings/xxx/cryptoKeys/xxx"
I've tried granting both the Cloud Function and Cloud Build service accounts from the calling project every role I could think of on the project that hosts KMS, including Owner.
This article has simple and clear instructions for accessing Container Registry and other services in another project, but nothing about KMS. This error doesn't seem to turn up any meaningful search results, and it doesn't look familiar to me at all.
Thanks for any help.
The Cloud KMS API was not enabled on the project running Cloud Build. It's unfortunate that the error message was so vague. In fact, I diagnosed the issue by running gcloud kms decrypt ... in a Cloud Build job, which helpfully told me that the API needed to be enabled.
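For reference, enabling the API, and granting the Cloud Build service account decrypt access on the cross-project key if that is also needed, looks roughly like this; the project IDs, keyring and key names, and project number are placeholders:

# Enable the Cloud KMS API on the project that runs Cloud Build
gcloud services enable cloudkms.googleapis.com --project=my-build-project
# Let the Cloud Build service account use the key in the KMS host project
gcloud kms keys add-iam-policy-binding my-key \
    --keyring=my-keyring --location=global --project=kms-host-project \
    --member=serviceAccount:123456789012@cloudbuild.gserviceaccount.com \
    --role=roles/cloudkms.cryptoKeyDecrypter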

Unable to subscribe to a Google Pub/Sub topic using a service account

I was trying to understand the example given in the Google Cloud samples at this link:
IAM Example
This example creates a service account, a VM, and a Pub/Sub topic. The VM runs as the service account, and the service account has subscriber access to the Pub/Sub topic, thereby giving services and applications running on the VM access to the Pub/Sub topic.
However, when I try to deploy this example, I get the error below:
The fingerprint of the deployment is a-v3HjAHciZeSLuE-vSeZw==
Waiting for create [operation-1525502430976-56b6fb6809800-dbd09909-c5d681b2]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1525502430976-56b6fb6809800-dbd09909-c5d681b2]: errors:
- code: RESOURCE_ERROR
location: /deployments/test-dp/resources/my-pubsub-topic
message: '{"ResourceType":"pubsub.v1.topic","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"message":"User
not authorized to perform this action.","status":"PERMISSION_DENIED","details":[],"statusMessage":"Forbidden","requestPath":"https://pubsub.googleapis.com/v1/projects/fresh-deck-194307/topics/my-pubsub-topic:setIamPolicy","httpMethod":"POST"}}'
It says the user is not authorized to perform this action.
I am unable to understand which user it is referring to.
Since my account is the owner of the project, I should be able to deploy a script that sets an IAM policy for subscribing to a Pub/Sub topic.
My understanding above might be wrong. Could somebody help me understand why this example is failing?
Also, if any additional configuration is needed for this example to run, I would expect it to be mentioned in the README file, but there are no instructions.
Make sure that the APIs for all resources you're trying to deploy are enabled.
Use the gcloud auth list command to make sure that the account with sufficient permissions is the active one.
Use the gcloud config list command to make sure that the default project and other settings are correct.
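A minimal sketch of those checks; the project ID comes from the error message above, and the exact set of APIs to enable is an assumption based on what the example deploys:

# Confirm the active account and the default project
gcloud auth list
gcloud config list
# Enable the APIs the example relies on
gcloud services enable deploymentmanager.googleapis.com compute.googleapis.com \
    pubsub.googleapis.com --project=fresh-deck-194307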