AWS CDK - compile-time programmatic extraction of account ID from ARN string

I have the ARN of a downstream resource that lives in an external AWS account. My infrastructure code is in AWS CDK. In my code, I want to extract the account ID from the ARN. How do I do that?

It can be done elegantly using the core CDK library. Here's the solution:
import { Arn } from 'monocdk';

// In your construct or utility class:
private static getAccountIdFromArn(arn: string): string {
  const arnComponents = Arn.parse(arn);
  if (undefined === arnComponents.account) {
    throw new Error(`account id not present in the arn ${arn}!`);
  }
  return arnComponents.account;
}
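If you are on CDK v2, the same API lives in aws-cdk-lib, where Arn.split with an explicit ArnFormat is the newer alternative to Arn.parse. A minimal standalone sketch with a made-up ARN:

import { Arn, ArnFormat } from 'aws-cdk-lib';

// Split a concrete (non-token) ARN into its components.
// An SQS queue ARN has no separate resource name, hence NO_RESOURCE_NAME.
const components = Arn.split(
  'arn:aws:sqs:us-east-1:123456789012:my-queue',
  ArnFormat.NO_RESOURCE_NAME
);
console.log(components.account); // '123456789012'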

Related

How to: Terraform a Snowflake stage and use an AWS IAM role ARN as the credentials

I am trying to Terraform a snowflake_stage and use the ARN from the IAM role, which was also Terraformed, as the credentials.
The Snowflake SQL works when I use:
create stage dev
URL='s3://name_of_bucket/'
storage_integration = dev_integration
credentials=(AWS_ROLE='arn:aws:iam::999999999999:role/service-role-name')
encryption=(TYPE='AWS_SSE_KMS' KMS_KEY_ID='aws/key')
FILE_FORMAT=DATABASE.PUBLIC.SCHEMA.FORMAT_NAME
COPY_OPTION=(ON_ERROR='CONTINUE' PURGE='FALSE' RETURN_FAILED_ONLY='TRUE');
but when I try to write an equivalent Terraform resource "snowflake_stage" using:
resource "snowflake_stage" "stage" {
name = "dev"
url = "s3://name_of_bucket/"
storage_integration = "dev_integration"
schema = "public"
credentials = "AWS_ROLE='aws_iam_role.snowflake_stage.arn'"
encryption = "(TYPE='AWS_SSE_KMS' KMS_KEY_ID='aws/key')
file_format = "DATABASE.PUBLIC.SCHEMA.FORMAT_NAME"
copy_options = "(ON_ERROR='CONTINUE' PURGE='FALSE' RETURN_FAILED_ONLY='TRUE')"
}
I get:
SQL compilation error: invalid value [Not a property list: TOK_LIST] for parameter '{1}
The credentials value seems to need the surrounding "AWS_ROLE='..'" to be valid.
I've tried just using:
credentials = aws_iam_role.snowflake_stage.arn
but got a different set of errors.
How do I combine the:
credentials = "AWS_ROLE='
with the
aws_iam_role.snowflake_stage.arn
and then append the closing:
'"
to build the credentials value?
First, you are missing a closing " in encryption. It should be:
encryption = "(TYPE='AWS_SSE_KMS' KMS_KEY_ID='aws/key')"
Second, for the role:
credentials = "AWS_ROLE='${aws_iam_role.snowflake_stage.arn}'"
A bit late on this, but encryption should be:
encryption = "TYPE='AWS_SSE_KMS' KMS_KEY_ID='aws/key'"
rather than:
encryption = "(TYPE='AWS_SSE_KMS' KMS_KEY_ID='aws/key')"
Moreover, using a storage integration on its own will be fine as long as you configure it with the appropriate role and permissions on that role (S3, KMS, and STS policy documents).
Then you can get rid of the encryption and credentials fields entirely, as sketched below.
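With the storage integration carrying the permissions, the stage reduces to something like this (a sketch reusing the question's values):

resource "snowflake_stage" "stage" {
  name                = "dev"
  url                 = "s3://name_of_bucket/"
  storage_integration = "dev_integration"
  schema              = "public"
  file_format         = "DATABASE.PUBLIC.SCHEMA.FORMAT_NAME"
  copy_options        = "(ON_ERROR='CONTINUE' PURGE='FALSE' RETURN_FAILED_ONLY='TRUE')"
}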

Connection from Lambda to Cognito rejected as not authorized

If Aws::CognitoIdentityProvider::Client isn't given an access_key_id and secret_access_key, it retrieves them from the ENV by default.
The strange thing is that this works in a brand-new Lambda but not in another one.
client = Aws::CognitoIdentityProvider::Client.new(region: ENV['AWS_REGION'])
client.admin_get_user({
  user_pool_id: Jets.application.config.cognito[:user_pool_id],
  username: 'username'
})
I get this error message, but I have no idea where I can set the policy for Cognito:
"errorMessage": "User: arn:aws:sts::123123123123:assumed-role/firstage-api-stag-IamRole-1DYOOEVSCURMY/xxxxxx-api-stag-mes_controller-show is not authorized to perform: cognito-idp:AdminGetUser on resource: arn:aws:cognito-idp:ap-northeast-2:319924209672:userpool/ap-northeast-2_0cSCFMK4r",
"errorType": "Function<Aws::CognitoIdentityProvider::Errors::AccessDeniedException>",
I would like to use the default key ID and access key from the Lambda ENV rather than an IAM user.
What should I do?
Best practice for Lambda functions is to ensure that the IAM role associated with the Lambda function has a policy that lets it invoke the specific AWS service. Do not hard-code creds in Lambda functions.
For some services, you need to write a custom policy. For example, for Pinpoint voice, you need to write a custom policy in JSON so the Lambda function can invoke that service.
For CognitoIdentityProvider, try attaching a policy with the required cognito-idp permissions to your IAM role.
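Below is a minimal custom-policy sketch, scoped to the exact action and user pool ARN from the question's error message (a broader managed policy such as AmazonCognitoPowerUser, used further down, also works):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cognito-idp:AdminGetUser",
      "Resource": "arn:aws:cognito-idp:ap-northeast-2:319924209672:userpool/ap-northeast-2_0cSCFMK4r"
    }
  ]
}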
When in doubt about which services your IAM role can invoke, check the role's Access Advisor tab in the IAM console.
I just tested this use case and built a Lambda function that creates a CognitoIdentityProviderClient object and gets users in a specific user pool. I implemented this using the Lambda Java runtime API, but it does not matter what supported Lambda programming language you use. You still need to configure your IAM role in the same way.
This code works perfectly. Here is the Handler class:
public class Handler implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        LambdaLogger logger = context.getLogger();
        String pool = event.get("Pool");
        logger.log("pool: " + pool);
        CognitoInfo cog = new CognitoInfo();
        String xml = cog.getUsers(pool);
        logger.log("XML: " + xml);
        return xml;
    }
}
Here is a method named getUsers located in the CognitoInfo class that invokes the AWS Service:
public String getUsers(String userPoolId) {

    CognitoIdentityProviderClient cognitoclient = CognitoIdentityProviderClient.builder()
            .region(Region.US_EAST_1)
            .build();

    try {
        ArrayList<String> userList = new ArrayList<>();

        // List all users in the given user pool
        ListUsersRequest usersRequest = ListUsersRequest.builder()
                .userPoolId(userPoolId)
                .build();

        ListUsersResponse response = cognitoclient.listUsers(usersRequest);
        for (UserType user : response.users()) {
            userList.add(user.username());
        }
        return convertToString(toXml(userList));

    } catch (CognitoIdentityProviderException e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
    return "";
}
The Lambda function is configured to use an IAM role named lambda-support2 that has the AmazonCognitoPowerUser policy.
This Lambda function successfully retrieves the users in a specific user pool.
Make sure that all your Lambda functions have the proper IAM role configured.

How do I apply a lifecycle rule to an EXISTING s3 bucket in Terraform?

New to Terraform. I'm trying to apply a lifecycle rule to an existing S3 bucket declared as a data source, but I guess I can't do that with a data source; it throws an error. Here's the gist of what I'm trying to achieve:
data "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
...and if this were a resource, not a data source, then it would work. How can I apply a lifecycle rule to an S3 bucket declared as a data source? Google-fu has yielded little in the way of results. Thanks!
The best way to solve this is to import your bucket into the Terraform state instead of using it as a data source.
To do that, put this in your Terraform code:
resource "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
Then run in the terminal:
terraform import aws_s3_bucket.test-bucket bucket_name
This will import the bucket into your state, and you can then make changes or add new things to your bucket using Terraform.
As a last step, just run terraform apply and the lifecycle rule will be added.
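Note that on version 4 and later of the AWS provider, inline lifecycle_rule blocks on aws_s3_bucket are deprecated in favor of a separate resource. A rough equivalent of the rule above as a sketch, assuming the imported bucket:

resource "aws_s3_bucket_lifecycle_configuration" "test-bucket" {
  bucket = aws_s3_bucket.test-bucket.id

  rule {
    id     = "Expiration Rule"
    status = "Enabled"

    filter {
      prefix = "reports/"
    }

    expiration {
      days = 30
    }
  }
}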

How to get the AWS IoT custom endpoint in CDK?

I want to pass the IoT custom endpoint as an env var to a Lambda declared in CDK.
I'm talking about the account-specific IoT custom endpoint shown on the AWS IoT console's Settings page.
How do I get it in the context of CDK?
You can refer to this AWS sample code:
https://github.com/aws-samples/aws-iot-cqrs-example/blob/master/lib/querycommandcontainers.ts
// Assumes: import * as customResource from '@aws-cdk/custom-resources';
const getIoTEndpoint = new customResource.AwsCustomResource(this, 'IoTEndpoint', {
  onCreate: {
    service: 'Iot',
    action: 'describeEndpoint',
    physicalResourceId: customResource.PhysicalResourceId.fromResponse('endpointAddress'),
    parameters: {
      "endpointType": "iot:Data-ATS"
    }
  },
  policy: customResource.AwsCustomResourcePolicy.fromSdkCalls({ resources: customResource.AwsCustomResourcePolicy.ANY_RESOURCE })
});

const IOT_ENDPOINT = getIoTEndpoint.getResponseField('endpointAddress');
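You can then pass the resolved value to your Lambda as an environment variable, which is what the question asks for. A sketch; the function name, runtime, and asset path are placeholders:

// Assumes: import * as lambda from '@aws-cdk/aws-lambda';
const fn = new lambda.Function(this, 'MyHandler', {
  runtime: lambda.Runtime.NODEJS_14_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('lambda'),
  environment: {
    // Resolved at deploy time from the custom resource above
    IOT_ENDPOINT: getIoTEndpoint.getResponseField('endpointAddress'),
  },
});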
AFAIK the only way to retrieve it is by using Custom Resources (Lambda), for example (IoTThing): https://aws.amazon.com/blogs/iot/automating-aws-iot-greengrass-setup-with-aws-cloudformation/

Terraform - AWS IAM user with Programmatic access

I'm working with AWS via Terraform.
I'm trying to create an IAM user with an access type of "Programmatic access".
With the AWS Management Console this is quite simple.
When trying with Terraform (per the aws_iam_user docs), it seems that only the following arguments are supported:
name
path
permissions_boundary
force_destroy
tags
Maybe this should be configured via a policy?
Any help will be appreciated.
(*) Related question with different scenario.
You can use the aws_iam_access_key Terraform resource (https://www.terraform.io/docs/providers/aws/r/iam_access_key.html) to create access keys for the user; that gives the user programmatic access.
Hope this helps.
The aws_iam_user resource also needs an aws_iam_access_key resource created for it.
The iam-user module has a comprehensive example of using it.
You could also use that module straight from the registry and let it do everything for you, as in the sketch below.
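A minimal sketch of the module approach (the source path and variable names follow the terraform-aws-modules registry documentation; treat them as assumptions and verify against the registry page):

module "user" {
  source = "terraform-aws-modules/iam/aws//modules/iam-user"

  name                          = "example-user"
  create_iam_access_key         = true
  create_iam_user_login_profile = false
}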
If you don't want to encrypt the keys and are just looking to get the access key and secret key in plain text, you can use this:
main.tf
resource "aws_iam_access_key" "sagemaker" {
user = aws_iam_user.user.name
}
resource "aws_iam_user" "user" {
name = "user-name"
path = "/"
}
data "aws_iam_policy" "sagemaker_policy" {
arn = "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess"
}
resource "aws_iam_policy_attachment" "attach-policy" {
name = "sagemaker-policy-attachment"
users = [aws_iam_user.user.name]
policy_arn = data.aws_iam_policy.sagemaker_policy.arn
}
output.tf
output "secret_key" {
  # The access key resource above is named "sagemaker", not "user"
  value     = aws_iam_access_key.sagemaker.secret
  # Newer Terraform versions require outputs derived from sensitive values to be marked sensitive
  sensitive = true
}

output "access_key" {
  value = aws_iam_access_key.sagemaker.id
}
You will get the access key and secret key in plain text and can use them directly.
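To read them back after applying, query the outputs individually (sensitive outputs are still printed when requested by name):

terraform apply
terraform output access_key
terraform output secret_key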