Is there any way to determine what IAM permissions I actually need for a CloudFormation template? - amazon-web-services

Just wondering: what's the best practice for determining what permissions I should give my CloudFormation template?
After spending some time trying to grant only the minimal permissions it requires, I find that this is really time-consuming and error-prone. I also note that depending on the state of my stack (brand new vs. being updated vs. being deleted), I need different permissions.
I would guess it should be possible to write a parser that, given a CloudFormation template, determines the minimum set of permissions it requires?
Maybe I can give ec2:* access only to resources tagged Cost Center: My Project Name? Is this OK? But then what happens when I change my project name, for example?
Alternatively, is it OK to give, say, ec2:* access on the assumption that the CloudFormation template is usually only executed from CodeCommit/GitHub/CodePipeline and is not something that is likely to be public or easy to hack? Though this sounds like a flawed assumption to me...
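For illustration, I imagine such a tag-scoped statement would look roughly like this (CostCenter is a hypothetical tag key, and I'm aware that not every ec2 action supports resource-level conditions, which is part of my doubt):
{
  "Effect": "Allow",
  "Action": "ec2:*",
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/CostCenter": "My Project Name"
    }
  }
}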

In the short term, you can use aws-leastprivilege, but it doesn't support every resource type.
For the long term: as mentioned in this 2019 re:Invent talk, CloudFormation is working towards open-sourcing and migrating most of its resource types to a new public resource schema. One of the benefits of this is that you'll be able to see the permissions required to perform each operation.
E.g., for AWS::ImageBuilder::Image, the schema says:
"handlers": {
"create": {
"permissions": [
"iam:GetRole",
"imagebuilder:GetImageRecipe",
"imagebuilder:GetInfrastructureConfiguration",
"imagebuilder:GetDistributionConfiguration",
"imagebuilder:GetImage",
"imagebuilder:CreateImage",
"imagebuilder:TagResource"
]
},
"read": {
"permissions": [
"imagebuilder:GetImage"
]
},
"delete": {
"permissions": [
"imagebuilder:GetImage",
"imagebuilder:DeleteImage",
"imagebuilder:UnTagResource"
]
},
"list": {
"permissions": [
"imagebuilder:ListImages"
]
}
}
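For illustration, the create-handler permissions above map directly onto a policy statement for whatever role executes the stack. A minimal sketch, assuming you would scope Resource down to your own account/ARNs rather than leaving it at "*":
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ImageBuilderImageCreate",
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "imagebuilder:GetImageRecipe",
        "imagebuilder:GetInfrastructureConfiguration",
        "imagebuilder:GetDistributionConfiguration",
        "imagebuilder:GetImage",
        "imagebuilder:CreateImage",
        "imagebuilder:TagResource"
      ],
      "Resource": "*"
    }
  ]
}
The update and delete handlers would be merged in the same way to cover stack updates and deletions.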

Related

Lambda - Is there a way to conditionally switch between Unreserved and Reserved concurrency in serverless templates?

I am using a serverless template to create a Lambda function in AWS.
If I don't specify any value for the property "ReservedConcurrentExecutions", then the function gets created with Unreserved concurrency.
Now, I would like to use reserved concurrency (or unreserved) depending on an input parameter.
Function with Reserved Concurrency:
"MyFunction": {
"Type": "AWS::Serverless::Function",
"Properties": {
"Handler": "MyFunctionHandler",
"CodeUri": "myfunction.zip",
"ReservedConcurrentExecutions" : 2,
}
}
Function with Unreserved Concurrency (just omit the ReservedConcurrentExecutions property):
"MyFunction": {
"Type": "AWS::Serverless::Function",
"Properties": {
"Handler": "MyFunctionHandler",
"CodeUri": "myfunction.zip",
}
}
I know I can declare the 2 functions separately and have a Condition to create one or the other.
What I would like to know is if it is possible to have just one function and conditionally add the ReservedConcurrentExecutions property.
Thank you!
The Serverless Framework does not support conditional statements or conditional properties on resources, but you can try this "ifelse" plugin.
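If the template is deployed as plain SAM/CloudFormation (which the AWS::Serverless::Function snippets above suggest) rather than through the Serverless Framework, a Condition combined with Fn::If and AWS::NoValue may let you keep a single function definition. A sketch, assuming a hypothetical ConcurrencyMode parameter that drives the condition:
"Conditions": {
  "UseReservedConcurrency": {
    "Fn::Equals": [ { "Ref": "ConcurrencyMode" }, "reserved" ]
  }
},
"Resources": {
  "MyFunction": {
    "Type": "AWS::Serverless::Function",
    "Properties": {
      "Handler": "MyFunctionHandler",
      "CodeUri": "myfunction.zip",
      "ReservedConcurrentExecutions": {
        "Fn::If": [ "UseReservedConcurrency", 2, { "Ref": "AWS::NoValue" } ]
      }
    }
  }
}
When the condition is false, AWS::NoValue removes the property entirely, which is equivalent to the unreserved variant above.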

How to disable the cross-stack creation of outputs and imports when sharing tokens across stacks

I am working on two separate stacks (call them A and B) in the same CDK project. I generate a policy in stack B containing resource references to the ARNs of resources in stack A. CDK does this smart bit of magic that creates CfnOutputs for each of the shared ARNs and then uses Fn::ImportValue in stack B.
Sample TypeScript code in stack B:
const role = stackA.role;
stackB.permissionsBoundary.addStatements(
  new iam.PolicyStatement({
    resources: [ role.roleArn ],
    actions: [
      'sts:AssumeRole'
    ]
  })
);
I do not want it to do this. It causes all kinds of grief when you need to dispose of the stacks afterwards. I just want to use literal strings in stack B that are extracted from the tokens generated in Stack A.
Please tell me there is a way to turn off this feature. It is a very, very leaky abstraction and should be optional.
Example generated template in Stack A:
"ExportsOutputFnGetAttfoo": {
"Value": {
"Fn::GetAtt": [
"bar",
"Arn"
]
},
"Export": {
"Name": "stackA:ExportsOutputFnGetAttfoo"
}
}
Example generated template in Stack B:
"Resource": [
{
"Fn::ImportValue": "stackA:ExportsOutputFnGetAttfoo"
}
I do not believe it is possible to use "literal strings in stack B that are extracted from the tokens generated in Stack A", as the ARNs are not known at CloudFormation template generation time, since you generate both stacks at the same time.
Cross-stack resource sharing via CF outputs is the usual method to use - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-crossstackref.html

substitution variable $BRANCH_NAME gives nothing while building

I'm building Docker images using a Cloud Build trigger. Previously $BRANCH_NAME was working, but now it's giving null.
Thanks in advance.
I will post my comment as an answer, as it is too long for the comment section.
According to this documentation, you should have the possibility to use the $BRANCH_NAME default substitution for builds invoked by triggers.
The same documentation states:
"If a default substitution is not available (such as with sourceless builds, or with builds that use storage source), then occurrences of the missing variable are replaced with an empty string."
I assume this might be the reason you are receiving NULL.
Have you performed any changes? Could you please provide some further information, such as your .yaml/.json file, your trigger configuration and the error you are receiving?
The problem was not with $BRANCH_NAME; I was using the resulting JSON to fetch the branch name,
like:
"source": {
"repoSource": {
"projectId": "project_id",
"repoName": "bitbucket_repo_name",
"branchName": "integration"
}
}
and
I was using build_details['source']['repoSource']['branchName']
but now it's returning something like:
"source": {
"repoSource": {
"projectId": "project_id",
"repoName": "bitbucket_repo_name",
"commitSha": "ght8939jj5jd9jfjfjigk0949jh8wh4w"
}
},
So now I'm using build_details['substitutions']['BRANCH_NAME'] and it's working fine.
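For reference, the substitutions block in the build details now carries the branch for trigger builds; it looks roughly like this (keys beyond BRANCH_NAME are the usual defaults and may vary per trigger):
"substitutions": {
  "BRANCH_NAME": "integration",
  "REPO_NAME": "bitbucket_repo_name",
  "COMMIT_SHA": "ght8939jj5jd9jfjfjigk0949jh8wh4w"
}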

Unrecognized Verify Auth Challenge Lambda response C#

Hi, I'm implementing a custom auth flow on a Cognito User Pool. I managed to handle the DefineAuthChallenge and CreateAuthChallenge triggers, but not VerifyAuthChallenge.
I use this documentation as a guide: Verify Auth Challenge Response Lambda Trigger
I take the verify-lambda input and add answerCorrect = true to the response, as described in the documentation. The Define- and CreateAuthChallenge parts work as expected with the given information. When verifying the challenge answers, I get InvalidLambdaResponseException: Unrecognizable lambda output as a response. The verify lambda exits successfully, returning this object:
{
  "version": 1,
  "triggerSource": "VerifyAuthChallengeResponse_Authentication",
  "region": "eu-central-1",
  "userPoolId": "eu-central-1_XXXXXXXXX",
  "callerContext": {
    "awsSdkVersion": "aws-sdk-dotnet-coreclr-3.3.12.7",
    "clientId": "2490gqsa3gXXXXXXXXXXXXXXXX"
  },
  "request": {
    "challengeAnswer": "{\"DeviceSub\":\"TestSub\"}",
    "privateChallengeParameters": {
      "CUSTOM_CHALLENGE": "SessionService_SendDevice"
    },
    "userAttributes": {
      "sub": "8624237e-0be8-425e-a2cb-XXXXXXXXXXXX",
      "email_verified": "true",
      "cognito:user_status": "CONFIRMED",
      "email": "X.XXXXXXXX#XXXXXXXXXX.de"
    }
  },
  "response": {
    "answerCorrect": true
  },
  "userName": "8624237e-0be8-425e-a2cb-XXXXXXXXXXXX"
}
Earlier, I ran into the problem that the "challengeAnswer" part was described as a Dictionary in the documentation, but it is actually just a string containing the dictionary as JSON. Sadly, I cannot find any information anywhere on why the returned object isn't accepted by Cognito.
Apparently someone had the same problem as me, using JavaScript: GitHub link
Can anyone tell me what the response object should look like so that it is accepted by Cognito? Thank you.
Well, my mistake was not considering the custom authentication flow. I found different documentation, which is, by the way, the one you should definitely use:
Customizing your user pool authentication flow
I ran into two wrong parts in the documentation here (the trigger sub-pages) and one error on my part.
Wrong part 1:
In the DefineAuthChallenge and CreateAuthChallenge inputs, the session is defined as a list of challenge results. This is all fine, but the challenge result object has its challenge metadata field wrongly displayed as "ChallengeMetaData", when it should instead be "ChallengeMetadata", with a lowercase "d" in "data". This gave me the "Unrecognized lambda output" error, because "ChallengeMetaData" wasn't what the backend was expecting; it was looking for "ChallengeMetadata", which wasn't present. The first time you enter the DefineAuthChallenge Lambda, this error doesn't show up, because the session doesn't contain any challenge answers yet. The moment you verify a challenge, though, the session gets filled and the uppercase "D" gives you trouble.
Wrong part 2:
As described in my question, the VerifyAuthChallenge input for the "challengeAnswer" is a string, not a Dictionary.
Both of these wrong parts are displayed correctly on the documentation page I linked here, so I would recommend using that instead of the other documentation.
Error on my side:
I didn't really check what happens after you verify a custom challenge via the VerifyAuthChallenge trigger. In the given link, in the image above the headline 'DefineAuthChallenge: The challenges (state machine) Lambda trigger', it clearly states that after verifying the response, the DefineAuthChallenge trigger is invoked again, which I didn't consider.
I hope this saves someone the time it took me to figure it out :-)

How can I install the sample AdventureWorksDW database on SQL DW using an ARM script

I can create a SQL DW using ARM, no problem. However, the portal supports an option of also installing a sample database, e.g. AdventureWorksDW. How can I do the equivalent using an ARM script?
BTW, I clicked on "automation options" in the portal and it shows an ARM script with an extension that is probably the piece that installs the sample database, but it asks for some parameters (e.g. storageKey, storageUri) that I don't know.
Here's what I think is the relevant portion of the ARM JSON:
"name": "PolybaseImport",
"type": "extensions",
"apiVersion": "2014-04-01-preview",
"dependsOn": [
"[concat('Microsoft.Sql/servers/', parameters('serverName'), '/databases/', parameters('databaseName'))]"
],
"properties": {
"storageKeyType": "[parameters('storageKeyType')]",
"storageKey": "[parameters('storageKey')]",
"storageUri": "[parameters('storageUri')]",
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]",
"operationMode": "PolybaseImport"
}
More specifically, looking at the ARM deploy script generated from the portal, here are the key elements that I need to know in order to auto deploy using my own ARM script:
…
"storageKey": {
  "value": null  <- without knowing this, I can’t deploy.
},
"storageKeyType": {
  "value": "SharedAccessKey"
},
"storageUri": {
  "value": "https://sqldwsamplesdefault.blob.core.windows.net/adventureworksdw/AdventureWorksDWPolybaseImport/Manifest.xml"  <- this is not a public blob, so I can’t look at it
},
…
AFAIK that's currently not possible. The portal kicks off a workflow that provisions the new DW resources, generates the sample DW schema, and then loads the data. The sample is stored in a non-public blob, so you won't be able to access it.
I don't think it would be hard to make it available publicly, but it does take some work, so perhaps you should add a suggestion here: https://feedback.azure.com/forums/307516-sql-data-warehouse