I've defined a user and a managed policy in CDK v2, similar to:
const policy = new iam.ManagedPolicy(this, `s3access`, {
statements: [
new iam.PolicyStatement({
effect: iam.Effect.ALLOW,
actions: ['s3:PutObject', 's3:GetObject'],
resources: ['*']
})
]
})
const someUser = new iam.User(this, 'some-user', { managedPolicies: [policy] });
I want to test that the user has the managed policy applied to it using CDK test assertions, but I'm struggling to figure out how with the existing test constructs. The following assertion:
template.hasResourceProperties('AWS::IAM::ManagedPolicy', {
PolicyDocument: Match.objectLike({
Statement: [
{
Action: ['s3:PutObject', 's3:GetObject'],
Effect: 'Allow',
Resource: [
'*'
]
},
]
})
})
...matches the managed policy, but doesn't test that the user has the managed policy applied.
What is the pattern / best practice for doing this?
You need to match the user's ManagedPolicyArns property as it appears in the template:
"Type": "AWS::IAM::User",
"Properties": {
"ManagedPolicyArns": [
{
"Ref": "s3access10922181"
}
]
The trick is to get the {"Ref": "s3access10922181"} reference to the policy. Here are two equivalent approaches:
Approach 1: stack.node.tryFindChild
const managedPolicyChild = stack.node.tryFindChild('s3access') as iam.ManagedPolicy | undefined;
if (!managedPolicyChild) throw new Error('Expected a defined ManagedPolicy');
const policyArnRef = stack.resolve(managedPolicyChild.managedPolicyArn);
template.hasResourceProperties('AWS::IAM::User', {
ManagedPolicyArns: Match.arrayWith([policyArnRef]),
});
Approach 2: template.findResources
const managedPolicyResources = template.findResources('AWS::IAM::ManagedPolicy');
const managedPolicyLogicalId = Object.keys(managedPolicyResources).find((k) => k.startsWith('s3access'));
if (!managedPolicyLogicalId) throw new Error('Expected to find a ManagedPolicy Id');
template.hasResourceProperties('AWS::IAM::User', {
ManagedPolicyArns: Match.arrayWith([{ Ref: managedPolicyLogicalId }]),
});
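For intuition, Approach 2 boils down to plain dictionary matching against the synthesized template JSON. A minimal Python sketch of that logic (the logical IDs `s3access10922181` and `someuserABC123` are invented for the example; a real template would come from `template.toJSON()`):

```python
# A toy template shaped like CDK's synthesized output (names are made up).
template_json = {
    "Resources": {
        "s3access10922181": {"Type": "AWS::IAM::ManagedPolicy", "Properties": {}},
        "someuserABC123": {
            "Type": "AWS::IAM::User",
            "Properties": {"ManagedPolicyArns": [{"Ref": "s3access10922181"}]},
        },
    }
}

# Approach 2 in plain Python: find the ManagedPolicy's logical ID by prefix...
resources = template_json["Resources"]
policy_id = next(
    k for k, v in resources.items()
    if v["Type"] == "AWS::IAM::ManagedPolicy" and k.startswith("s3access")
)

# ...then assert that the User references it in ManagedPolicyArns.
user_props = resources["someuserABC123"]["Properties"]
assert {"Ref": policy_id} in user_props["ManagedPolicyArns"]
print(policy_id)
```

This is exactly what `Match.arrayWith([{ Ref: managedPolicyLogicalId }])` checks for you, with better failure messages.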
I'm creating an AppFlow flow from an S3 bucket to Salesforce through CDK, with the upsert option, using an existing connection from S3 to Salesforce:
new appflow.CfnConnectorProfile(this, 'Connector',{
"connectionMode": "Public",
"connectorProfileName":"connection_name",
"connectorType":"Salesforce"
})
Destination flow code:
new appflow.CfnFlow(this, 'Flow', {
destinationFlowConfigList: [
{
"connectorProfileName": "connection_name",
"connectorType": "Salesforce",
"destinationConnectorProperties": {
"salesforce": {
"errorHandlingConfig": {
"bucketName": "bucket-name",
"bucketPrefix": "subfolder",
},
"idFieldNames": [
"ID"
],
"object": "object_name",
"writeOperationType": "UPSERT"
}
}
}
],
..... other props ....
}
tasks: [
{
"taskType":"Filter",
"sourceFields": [
"ID",
"Some other fields",
...
],
"connectorOperator": {
"salesforce": "PROJECTION"
}
},
{
"taskType":"Map",
"sourceFields": [
"ID"
],
"taskProperties": [
{
"key":"SOURCE_DATA_TYPE",
"value":"Text"
},
{
"key":"DESTINATION_DATA_TYPE",
"value":"Text"
}
],
"destinationField": "ID",
"connectorOperator": {
"salesforce":"PROJECTION"
}
},
{
.... some other mapping fields.....
}
But the problem is this error: "Invalid request provided: AWS::AppFlow::Flow Create Flow request failed: [ID does not exist in the destination connector]"
Given that error, how do I fix the problem with the existing connector that results in "ID does not exist in the destination connector"?
PS: ID is defined in the flow code, but it still says ID is not found.
I think your last connector operator should be:
"connectorOperator": {
"salesforce":"NO_OP"
}
instead of:
"connectorOperator": {
"salesforce":"PROJECTION"
}
since you are mapping the field ID into itself without any transformations whatsoever.
Schema snippet:
type Query {
getPCSData(country: String, year: String, payCycle: String): PCSDataOutput
}
Dynamodb Resolver:
PCSDataResolver:
Type: AWS::AppSync::Resolver
DependsOn: PCSGraphQLSchema
Properties:
ApiId:
Fn::GetAtt: [PCSGraphQLApi, ApiId]
TypeName: Query
FieldName: getPCSData
DataSourceName:
Fn::GetAtt: [PCSGraphQLDDBDataSource, Name]
RequestMappingTemplate: |
{
"version": "2017-02-28",
"operation": "GetItem",
"key": {
"Country_Year_PayCycle": ?????????
}
}
ResponseMappingTemplate: "$util.toJson($ctx.result)"
Here, I am looking to form the key Country_Year_PayCycle from all three arguments passed to the query, something like Country_Year_PayCycle = country + year + payCycle.
Is it possible to concatenate the query arguments to form the key?
This is how I resolved it:
RequestMappingTemplate: |
## Build the composite key from the query arguments
#set($concat ="_")
#set($country = $ctx.args.country )
#set($year = $ctx.args.year )
#set($payCycle = $ctx.args.payCycle )
#set($pk = "$country$concat$year$concat$payCycle")
{
"version": "2017-02-28",
"operation": "GetItem",
"key": {
"Country_Year_PayCycle": $util.dynamodb.toDynamoDBJson($pk)
}
}
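The VTL above is just string concatenation before handing the result to `$util.dynamodb.toDynamoDBJson`. The same key-building logic, sketched in Python for clarity (`make_pk` and the sample argument values are illustrative, not part of the template):

```python
def make_pk(country: str, year: str, pay_cycle: str, sep: str = "_") -> str:
    """Mirror of the VTL: "$country$concat$year$concat$payCycle"."""
    return f"{country}{sep}{year}{sep}{pay_cycle}"

# Example: the resolver arguments become one composite partition key.
print(make_pk("US", "2023", "Q1"))  # -> US_2023_Q1
```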
Let's suppose that I have a bucket with many folders and objects.
Objects in this bucket can be public via their access settings. If I want to know whether there is at least one public object, or list all public objects, how should I do this? Is there any way to do it automatically?
It appears that you would need to loop through every object and call GetObjectAcl().
You'd preferably do it in a programming language, but here is an example with the AWS CLI:
aws s3api get-object-acl --bucket my-bucket --key foo.txt
{
"Owner": {
"DisplayName": "...",
"ID": "..."
},
"Grants": [
{
"Grantee": {
"DisplayName": "...",
"ID": "...",
"Type": "CanonicalUser"
},
"Permission": "FULL_CONTROL"
},
{
"Grantee": {
"Type": "Group",
"URI": "http://acs.amazonaws.com/groups/global/AllUsers"
},
"Permission": "READ"
}
]
}
I granted the READ permission by using Make Public in the S3 management console. Please note that objects could also be made public via a Bucket Policy, which would not show up in the ACL.
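A small, SDK-free sketch of the check you would run on each GetObjectAcl response: an object is ACL-public when any grant targets the predefined AllUsers or AuthenticatedUsers groups (the URIs below are the documented group URIs; the sample `acl` dict mimics the CLI output above):

```python
# Group URIs that make an ACL grant "public" per the S3 documentation.
PUBLIC_GROUP_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def acl_is_public(acl: dict) -> bool:
    """True if any grant goes to the AllUsers or AuthenticatedUsers group."""
    return any(
        grant["Grantee"].get("URI") in PUBLIC_GROUP_URIS
        for grant in acl.get("Grants", [])
    )

# Shaped like the get-object-acl output shown above.
acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "..."},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}
print(acl_is_public(acl))  # -> True
```

Remember this only covers ACLs; a bucket policy can also make objects public without any trace in the ACL.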
Use the listObjectsV2 method from the AWS SDK to do this with JavaScript:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property. There is an SDK for every major language (Java, etc.); use the one you know.
var params = {
Bucket: "examplebucket",
MaxKeys: 2
};
s3.listObjectsV2(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else {
// the bucket isn't empty
if (data.Contents && data.Contents.length != 0)
console.log(data); // successful response
}
/*
data = {
Contents: [
{
ETag: "\"70ee1738b6b21e2c8a43f3a5ab0eee71\"",
Key: "happyface.jpg",
LastModified: <Date Representation>,
Size: 11,
StorageClass: "STANDARD"
},
{
ETag: "\"becf17f89c30367a9a44495d62ed521a-1\"",
Key: "test.jpg",
LastModified: <Date Representation>,
Size: 4192256,
StorageClass: "STANDARD"
}
],
IsTruncated: true,
KeyCount: 2,
MaxKeys: 2,
Name: "examplebucket",
NextContinuationToken: "1w41l63U0xa8q7smH50vCxyTQqdxo69O3EmK28Bi5PcROI4wI/EyIJg==",
Prefix: ""
}
*/
});
Building off of John's answer, you might find this helpful:
import concurrent.futures
import boto3
BUCKETS = [
"TODO"
]
def get_num_objs(bucket):
    num_objs = 0
    s3_client = boto3.client("s3")
    paginator = s3_client.get_paginator("list_objects_v2")
    for res in paginator.paginate(Bucket=bucket):
        if "Contents" not in res:
            print(f"""No contents in res={res}""")
            continue
        num_objs += len(res["Contents"])
    return num_objs

for BUCKET in BUCKETS:
    print(f"Analyzing bucket={BUCKET}...")
    num_objs = get_num_objs(BUCKET)
    print(f"BUCKET={BUCKET} has num_objs={num_objs}")
    # if num_objs > 10_000:
    #     raise Exception(f"num_objs={num_objs}")
s3_client = boto3.client("s3")

def assert_no_public_obj(bucket, res):
    if res["ResponseMetadata"]["HTTPStatusCode"] != 200:
        raise Exception(res)
    if "Contents" not in res:
        print(f"""No contents in res={res}""")
        return
    print(f"""Fetched page with {len(res["Contents"])} objs...""")
    for i, obj in enumerate(res["Contents"]):
        if i % 100 == 0:
            print(f"""Fetching {i}-th obj in page...""")
        acl = s3_client.get_object_acl(Bucket=bucket, Key=obj["Key"])
        for grant in acl["Grants"]:
            # Amazon S3 considers a bucket or object ACL public if it grants any permissions to members of the predefined AllUsers or AuthenticatedUsers groups.
            # https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html#access-control-block-public-access-policy-status
            uri = grant["Grantee"].get("URI")
            if not uri:
                continue
            if "AllUsers" in uri or "AuthenticatedUsers" in uri:
                raise Exception(f"""Grantee={grant["Grantee"]} found for {bucket}/{obj["Key"]}""")

paginator = s3_client.get_paginator("list_objects_v2")
with concurrent.futures.ThreadPoolExecutor() as executor:
    for BUCKET in BUCKETS:
        for res in paginator.paginate(Bucket=BUCKET):
            executor.submit(assert_no_public_obj, BUCKET, res)
I have created a parameter:
Parameters:
..
list:
Description: "Provide a list .."
Type: CommaDelimitedList
Default: "test1, test2"
Now I want to reference this list (which should resolve to "test1", "test2", ...) from a file in my CloudFormation template, which looks like this:
configure_xx:
files:
/etc/file.conf:
content: !Sub |
input {
logs {
log_group => [ "${list}" ]
access_key_id => "${logstashUserKey}"
secret_access_key => "${logstashUserKey.SecretAccessKey}"
region => "eu-west-1"
}
}
How can I make this work for the list parameter? (The keys work.)
The error: Fn::Sub expression does not resolve to a string
Just switch the parameter type to "String":
Parameters:
..
list:
Description: "Provide a list .."
Type: String
Default: "test1, test2"
If, for some reason, you have no control over this parameter's type, you can use Fn::Join to transform the list into a string. For example:
configure_xx:
files:
/etc/file.conf:
content:
Fn::Sub:
- |-
input {
logs {
log_group => [ "${joinedlist}" ]
access_key_id => "${logstashUserKey}"
secret_access_key => "${logstashUserKey.SecretAccessKey}"
region => "eu-west-1"
}
}
- joinedlist:
Fn::Join:
- ', '
- !Ref list
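For intuition, the Fn::Join step just collapses the list back into a single string, which is what Fn::Sub requires. In Python terms (using the parameter's default value as sample data):

```python
# Emulates the Fn::Join above: a CommaDelimitedList value becomes a
# plain string again, so the ${joinedlist} substitution has a string to use.
subnets = ["test1", "test2"]
joined = ", ".join(subnets)
print(joined)  # -> test1, test2
```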
I'm using a nested stack to create ELB and application stacks, and I need to pass a list of subnets to the ELB and application stacks.
The main JSON has the code below:
"Mappings":{
"params":{
"Subnets": {
"dev":[
"subnet-1”,
"subnet-2”
],
"test":[
"subnet-3”,
"subnet-4”,
"subnet-5”,
"subnet-6”
],
"prod":[
"subnet-7”,
"subnet-8”,
"subnet-9”
]
}
}
},
"Parameters":{
"Environment":{
"AllowedValues":[
"prod",
"preprod",
"dev"
],
"Default":"prod",
"Description":"What environment type is it (prod, preprod, test, dev)?",
"Type":"String"
}
},
Resources:{
"ELBStack": {
"Type": "AWS::CloudFormation::Stack",
"Properties": {
"TemplateURL": {
"Fn::Join":[
"",
[
"https://s3.amazonaws.com/",
"myS3bucket",
"/ELB.json"
]
]
},
"Parameters": {
"Environment":{"Ref":"Environment"},
"ELBSHORTNAME":{"Ref":"ELBSHORTNAME"},
"Subnets":{"Fn::FindInMap":[
"params",
"Subnets",
{
"Ref":"Environment"
}
]},
"S3Bucket":{"Ref":"S3Bucket"},
},
"TimeoutInMinutes": "60"
}
}
Now when I run this JSON (via Lambda or CloudFormation) I get the error below under the CloudFormation Events tab:
CREATE_FAILED AWS::CloudFormation::Stack ELBStack Value of property Parameters must be an object with String (or simple type) properties
Using the Lambda function below:
import boto3
import time
date = time.strftime("%Y%m%d")
time = time.strftime("%H%M%S")
stackname = 'FulfillSNSELB'
client = boto3.client('cloudformation')
response = client.create_stack(
StackName= (stackname + '-' + date + '-' + time),
TemplateURL='https://s3.amazonaws.com/****/**/myapp.json',
Parameters=[
{
'ParameterKey': 'Environment',
'ParameterValue': 'dev',
'UsePreviousValue': False
}]
)
def lambda_handler(event, context):
    return response
You can't pass a list to a nested stack. You have to pass a concatenation of the items with the intrinsic function Join, like this: !Join ["separator", [item1, item2, …]].
In the nested stack, the type of the corresponding parameter needs to be CommaDelimitedList (or another list type such as List<AWS::EC2::Subnet::Id>), so it is split back into a list.
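In plain Python terms, this is what the parent stack (or a boto3 create_stack call) must do: every parameter value it sends is a plain string, and the nested stack's list-typed parameter splits it on commas again. A sketch with made-up subnet IDs:

```python
# Parent side: a subnet list must be joined into one string parameter value,
# because nested-stack parameters only accept string (or simple) values.
subnets = ["subnet-1", "subnet-2"]
parameter_value = ",".join(subnets)
print(parameter_value)  # -> subnet-1,subnet-2

# Nested-stack side: a CommaDelimitedList parameter splits on commas,
# recovering the original list.
assert parameter_value.split(",") == subnets
```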
Your JSON is not well-formed. Running your JSON through aws cloudformation validate-template (or even jsonlint.com) quickly reveals several basic syntax errors:
Resources:{ requires the key to be surrounded by quotes: "Resources": {
Some of your quotation marks are invalid 'smart-quotes' "subnet-1”, that need to be replaced with standard ASCII quotes: "subnet-1",
(This is the one your error message refers to.) The "Properties" object in your "ELBStack" resource has a trailing comma after its last element, "S3Bucket":{"Ref":"S3Bucket"},, that needs to be removed.