I am getting the following error:
Cannot open this file because of an error: must
reference a valid S3 object to which you have access
Let me lay out some context that will prevent incorrect responses.
I am using an account I have had for several years.
I have tried using both the root user and an IAM role with AdministratorAccess attached.
I have created a child account, managed by this account as the parent, and tried there.
I am using a CloudFormation template from AWS Well Architected Labs.
I log in to the console, go to CloudFormation, select Create Stack, and that is when I hit the blocker. Despite the error, CloudFormation CREATES the bucket and object.
Here are the really strange parts:
My colleague tried to reproduce the issue. He did not encounter it.
I set up a brand new account, completely separate from this one (i.e. not managed in the organization), and I could not reproduce the error.
That is correct: when I set up a new, completely separate account, the stack created just fine.
I thought maybe, over the years, I had accumulated some bad policies or roles that were causing me issues, so I cleaned up shop right down to the bare essentials. STILL getting the error.
Right now, I can just use my brand spanking new account and not worry about this. However, I am completely stumped by what is causing this issue. I cannot see any reason why this error is appearing, and I would really like to keep using my main account, which I have held for several years and which has all my domains registered.
I am using admin IAM roles and even the root user. I am experiencing the same issue in child accounts of my parent account, but NOT in completely separate accounts. I see no policy, role, or other restriction listed anywhere in AWS that restricts CloudFormation in any way. It even creates the buckets and objects when I click "Create Stack".
I am completely stumped, and I guess I would like to be able to understand what has corrupted my account and its ability to use this feature.
Related
We are not able to enable AWS Inspector in our account in us-west-2. Our observation is that we are able to enable it in the other regions.
We use CloudFormation to set up the infrastructure. Looking at the error, we thought this might be due to some conflicting stacks/stack sets in our account, so we went ahead and deleted all of those. However, even after a day, the issue still persists.
We are getting the following error message:
Two state changes cannot be made at the same time. Wait till current status change completes.
Has anyone faced this issue? Is there a way to resolve this?
Amazon Inspector needs some policies to be enabled first:
1. Go to IAM and choose Create policy.
2. Choose Inspector2 as the service.
3. Choose the action BatchGetAccountStatus.
4. Attach the new policy to your user account.
If Inspector still isn't enabled, check the permissions required on the Inspector landing page and repeat these steps to add them.
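If you prefer to script it, here is a rough sketch of the same steps with the AWS SDK for Go; the policy name and user name are placeholders:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Policy document allowing the single Inspector action named above.
	doc := `{
	  "Version": "2012-10-17",
	  "Statement": [{
	    "Effect": "Allow",
	    "Action": "inspector2:BatchGetAccountStatus",
	    "Resource": "*"
	  }]
	}`

	created, err := svc.CreatePolicy(&iam.CreatePolicyInput{
		PolicyName:     aws.String("InspectorStatusAccess"), // placeholder name
		PolicyDocument: aws.String(doc),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Attach it to the user hitting the error ("your-user" is a placeholder).
	_, err = svc.AttachUserPolicy(&iam.AttachUserPolicyInput{
		UserName:  aws.String("your-user"),
		PolicyArn: created.Policy.Arn,
	})
	if err != nil {
		log.Fatal(err)
	}
}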
We're (mostly happily ;)) using the AWS CDK to deploy our application stack to multiple environments (e.g. production, centralized dev, individual dev).
Now we want to increase the security by applying the least privilege principle to the deployment role. As the CDK code already has all the information about which services it will touch, is there a best practice as to how to generate the role definition?
Obviously it can't be a part of the stack as it is needed to deploy the stack.
Is there any mechanism built into the CDK? (E.g. the construct CloudFrontDistribution is used, thus the deployment role needs permission to create, update, and delete CloudFront distributions; possibly even scoped, once the CloudFrontDistribution is mapped, to doing that only to that one distribution.)
Any best practices as how to achieve that?
No. Sadly, there isn't currently (2022-Q3) a way to have the CDK code also produce an IAM policy that would grant you access to deploy that template and nothing more.
However, everything is there to do it, and thanks to Aspects it could probably be done relatively easily if you wanted to put in the legwork. I know many people in the community would love to have this.
You run into a chicken-and-egg problem here. (We encounter a similar issue with Secrets Manager and initializing secrets.) Pretty much the only solution I've found that works is a first-time setup script that uses an SDK or the CLI to run the necessary commands, and from then on you can reference what it created.
However, it also depends on which roles you're talking about. cdk deploy pretty much needs access to any resource you may be setting up, but you can limit it through users. Your root-admin setup script, kept in a secret lockbox, can set up a single power user that is then used for initial cdk deploys. You can also set up additional user groups that are allowed to deploy with the CDK, or have that initial setup create a role that cdk deploy can assume.
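A rough sketch of that first-time setup with the AWS SDK for Go, assuming you want a dedicated role that cdk deploy can assume (the account ID and all names are placeholders):

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Trust policy so principals in the account can assume the role.
	// "111122223333" is a placeholder account ID.
	trust := `{
	  "Version": "2012-10-17",
	  "Statement": [{
	    "Effect": "Allow",
	    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
	    "Action": "sts:AssumeRole"
	  }]
	}`

	role, err := svc.CreateRole(&iam.CreateRoleInput{
		RoleName:                 aws.String("cdk-deploy-role"), // placeholder name
		AssumeRolePolicyDocument: aws.String(trust),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Start broad; tighten later once you know what the stacks actually touch.
	_, err = svc.AttachRolePolicy(&iam.AttachRolePolicyInput{
		RoleName:  role.Role.RoleName,
		PolicyArn: aws.String("arn:aws:iam::aws:policy/AdministratorAccess"),
	})
	if err != nil {
		log.Fatal(err)
	}
}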
I'm spinning up an Aurora Serverless DB cluster in my org's main account and am attempting to access it from one of the organization's many sub-accounts; however, I'm getting the error message Error: cluster arn:aws:rds:<region>:<mainaccount>:cluster:<clustername> does not belong to the calling account id <subaccount>. Previously, connectivity worked fine when the cluster was spun up in the sub-account, so this appears to be a purely IAM-related issue from what I can tell.
So far I've tried:
Sharing the cluster with all accounts in the org using the AWS RAM console, which resulted in the same error. Further research revealed that RAM for DB clusters appears to be aimed at granting cloning access rather than query access.
Looking for ways to add policies to the cluster itself that give the whole organization access to the resource (there doesn't appear to be any way to do this?)
The other thing I'm considering, which I haven't tried yet because it feels too heavy (and probably too expensive?) to be the best way to do things:
create a role in the main acct giving all org accts access
add access to the role in the permissions of my db-access users in the sub account
every time I go to make a db call, make an STS call to get temporary credentials to the role in the main account that has DB access
use the temporary credentials to access the Data API
That seems more complex than I want it to be, though, so I thought I'd ask in case anyone knows of a better way: what's the best way to gain access to an Aurora Serverless cluster using the Data API from a non-cluster-owning account in my organization?
Edit:
The statement I'm executing to access the DB has this form:
svc.ExecuteStatement(&rdsdataservice.ExecuteStatementInput{
	Schema:      aws.String(schema),
	Database:    aws.String(db),
	ResourceArn: aws.String(DatabaseARN),
	SecretArn:   aws.String(SecretARN),
	Sql:         aws.String(`SELECT ... `),
})
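For completeness, the assume-role variant I'm considering (steps 1-4 above) would look roughly like this, using stscreds from github.com/aws/aws-sdk-go/aws/credentials/stscreds; the role name is made up, <mainaccount> as above:

// Assume the DB-access role in the main account ("db-access-role" is made up).
sess := session.Must(session.NewSession())
creds := stscreds.NewCredentials(sess, "arn:aws:iam::<mainaccount>:role/db-access-role")

// Data API client that signs requests with the temporary role credentials.
svc := rdsdataservice.New(sess, &aws.Config{Credentials: creds})

out, err := svc.ExecuteStatement(&rdsdataservice.ExecuteStatementInput{
	Schema:      aws.String(schema),
	Database:    aws.String(db),
	ResourceArn: aws.String(DatabaseARN),
	SecretArn:   aws.String(SecretARN),
	Sql:         aws.String(`SELECT ... `),
})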
I have created a stack; in it we create a Lambda, execute some code from the SDK, access S3, write to DynamoDB, and some other stuff. The problem now is that we are trying to deploy to a different account/region that we have never deployed to before, and we are facing a lot of permission-related issues. Some of them my team has already seen and properly documented, but in other cases other teams may be hitting errors for which we lack that context. We try to go through them one by one as they appear, but it is painful. My question is: is there a way to describe/analyze the policies that the role I assume has, in order to vet that stack before provisioning? Or how can I figure out which permissions my resources need? Or is it basically a matter of going through all the permissions one by one?
I'd really like something like this to exist, but I don't foresee a reliable one being developed anytime soon. However, since I've been down that road myself, I would suggest something a bit more manageable.
The AWS CloudFormation service role lets you pass a role with greater permissions than the ones given to a normal user. In a nutshell: first create a role with some decently large, or even administrative, permissions. Then allow normal users to perform the iam:PassRole action on that resource (the role). Lastly, when you deploy a CloudFormation stack, make sure you specify the role you created as the "service role" in the stack options.
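If you deploy from code rather than the console, the service role is just the RoleARN parameter on CreateStack; a minimal sketch with made-up names:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	_, err := svc.CreateStack(&cloudformation.CreateStackInput{
		StackName:   aws.String("my-stack"),                                     // placeholder
		TemplateURL: aws.String("https://my-bucket.s3.amazonaws.com/stack.yml"), // placeholder
		// The service role CloudFormation assumes instead of using the caller's permissions.
		RoleARN:      aws.String("arn:aws:iam::111122223333:role/cfn-service-role"),
		Capabilities: aws.StringSlice([]string{"CAPABILITY_NAMED_IAM"}),
	})
	if err != nil {
		log.Fatal(err)
	}
}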
From a security standpoint, there are pros and cons to both using a service role and giving a lot of different permissions to normal users. You have to assess for yourself whether it's a risk you can manage.
I have an AWS Cognito user pool configured in my serverless.yml. Whenever I do a serverless deploy, it tries to create the same user pool domain even though it already exists, hence returning the error:
[aws-cognito-idp-userpool] domain already exist
The only workaround is for me to delete the user pool domain from the AWS UI every time I want to do a serverless deploy. Has anyone faced this issue before?
I believe there's no way to skip it.
Check this: https://github.com/serverless/serverless/issues/3183
You can try breaking the serverless.yml file into multiple files and deploying them separately for easier management, so that each file only creates/deploys the resources you need to create fresh.
The serverless.yml gets converted into the vendor-specific infrastructure-as-code file, e.g. a CloudFormation template for AWS.
Hope this helps.
This is actually a CloudFormation issue rather than a Serverless issue. I ran into it in my Serverless app, BUT had my UserPool* resources independently defined in the resources section of the serverless.yml file. I changed the Domain Prefix, and that requires the resource to be recreated. Here's the issue: CloudFormation always creates the new resource before deleting the old one, which blocks the new domain from being associated with the User Pool.
I've seen this behavior with other resources, and the recommended approach is to:
1. Blank out the resource from the template
2. Update the stack (deletes resource)
3. Restore the resource in template
4. Update the stack (creates a new one vs. replace).
This way you still leverage your automation tools without going to the console. Not perfect, and it'd be preferable if there were a way to force the replacement sequence in CloudFormation. If your setup has Serverless generating the resource, then deleting via the console may be your only option.
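If deleting by hand gets old, that console step can also be scripted with the SDK; a sketch, with the domain prefix and pool ID as placeholders:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cognitoidentityprovider"
)

func main() {
	svc := cognitoidentityprovider.New(session.Must(session.NewSession()))

	// Delete the existing domain so the next deploy can recreate it.
	_, err := svc.DeleteUserPoolDomain(&cognitoidentityprovider.DeleteUserPoolDomainInput{
		Domain:     aws.String("my-domain-prefix"),    // placeholder domain prefix
		UserPoolId: aws.String("us-east-1_XXXXXXXXX"), // placeholder pool ID
	})
	if err != nil {
		log.Fatal(err)
	}
}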