If given the correct permissions, a Lambda function will automatically create a CloudWatch log group to hold its log output. The same goes for Lambda@Edge functions, except that a log group is created in each region in which the function has run, and the log group's name includes the region. The problem is that the retention is set to "Never expire", and there is no way to change that unless you wait for the log group to be created and then change the retention configuration after the fact.
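(For reference, that after-the-fact fix amounts to a single CLI call per log group; the group name and retention value here are placeholders:)

```
aws logs put-retention-policy \
  --log-group-name /aws/lambda/my-function \
  --retention-in-days 30
```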
To address this, I would like to create those log groups preemptively in Terraform. The problem is that the region would need to be set in the provider meta argument or passed in the providers argument to a module. I had originally thought that I could get the set of all AWS regions using the aws_regions data source and then dynamically create a provider for each region. However, there is currently no way to dynamically generate providers (see https://github.com/hashicorp/terraform/issues/24476).
Has anyone solved this or a similar problem in some other way? Yes, I could create a script using the AWS CLI to do this, but I'd really like to keep everything in Terraform. Using Terragrunt is also an option, but I wanted to see if there were any solutions using pure Terraform before I go that route.
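The only pure-Terraform workaround I'm aware of is to enumerate the aliased providers by hand and attach one log group resource (or module instance) to each, which is verbose but works. A sketch, with illustrative region list, function name, and retention; Lambda@Edge groups are typically named `/aws/lambda/us-east-1.<function-name>` in each edge region, so adjust the name to whatever you observe:

```hcl
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "eu_west_1"
  region = "eu-west-1"
}

# One pre-created log group per region, with retention set from day one.
resource "aws_cloudwatch_log_group" "edge_us_east_1" {
  provider          = aws.us_east_1
  name              = "/aws/lambda/us-east-1.my-edge-function"
  retention_in_days = 30
}

resource "aws_cloudwatch_log_group" "edge_eu_west_1" {
  provider          = aws.eu_west_1
  name              = "/aws/lambda/us-east-1.my-edge-function"
  retention_in_days = 30
}
```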
I'm attempting to use an AWS Lambda SelfManagedKafka event source mapping, as described in Using Lambda with self-managed Apache Kafka. Confluent Cloud is being used to host the Kafka cluster the event source will use.
On creation of the event source mapping (via SAM/CloudFormation template), though, the trigger is showing the following as the "Last processing result":
PROBLEM: Cluster failed to authorize Lambda.
Based on the guide, this error indicates...
that the provided user doesn't have all of the following required Kafka access control list (ACL) permissions: [list truncated]
I've verified that the listed permissions are in place for the group, cluster, and topic, which leads me to suspect that the issue has something to do with the following note:
The group name must match the event source mapping's UUID.
My tentative take on that note is that it's suggesting the group name has to match the physical ID of the event source mapping, which is generated on creation. I've been unable to track down any further documentation that would elucidate that note. I'm wondering if anyone has encountered this before and can confirm it?
If that is true, it would seem that the expected workflow is to create the event source mapping, get the physical ID from the output, and then create the Kafka ACLs.
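(If so, the mapping's UUID can at least be pulled with the CLI once it exists; the function name here is a placeholder:)

```
aws lambda list-event-source-mappings \
  --function-name my-function \
  --query 'EventSourceMappings[].UUID'
```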
After digging around further, I found at least a partial answer to this question by manually creating the event source mapping via the AWS Console. The UI lets you optionally specify a consumer group ID, and includes the following help text for the input field:
The ID of a Kafka consumer group to join. If the consumer group ID you specify doesn't exist, or if you leave this field blank, Lambda generates a unique value.
If none is provided, the UUID of the event source mapping seems to be the value used.
As an alternative to taking that generated value and creating a consumer group in Kafka with that name, one can also add the CREATE permission on the cluster itself to the consumer's ACL. This enables Lambda to create the group on Kafka as well.
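With the standard Apache Kafka tooling, that cluster-level CREATE grant looks something like the following; the principal, bootstrap server, and properties file are placeholders, and Confluent Cloud has its own equivalent for managing ACLs:

```
kafka-acls --bootstrap-server <broker>:9092 \
  --command-config client.properties \
  --add --allow-principal User:my-lambda-user \
  --operation Create --cluster
```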
I'm trying to solve a problem with AWS IAM policies.
I need to allow certain users to delete/modify only those resources tagged with their particular username (this part I've solved), while also being able to create any new AWS resource.
The part I haven't solved is letting them create resources without the ability to modify any existing resources (unless those resources have the right tag).
Is there an existing AWS policy example that allows a user to create any resource (without granting delete/modify)? Is there a way to allow this without having to list every single AWS offering and continuously update the list for new offerings?
The AdministratorAccess managed policy will give all rights to create all services (along with everything else).
See https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_administrator
I managed to solve this problem with a rather ugly solution, but as far as I can tell it's the only solution.
I found a list of all aws actions: https://github.com/rvedotrc/aws-iam-reference
I then parsed out potentially troubling actions, such as anything with Delete or Terminate in the action name. I used vim and grep for this.
After that I broke that up into multiple aws_iam_group_policy statements. Each statement was attached to a corresponding group. The target users are then added to each of those groups.
Unfortunately, this is pretty ugly and required 5 different groups and policies, but it's the solution I arrived at.
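A sketch of the shape of one such group policy, combining the create-only allowance with the tag-scoped modify/delete; the action lists are heavily abbreviated and the `owner` tag key is an assumed convention:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateAnything",
      "Effect": "Allow",
      "Action": ["ec2:Create*", "ec2:RunInstances", "s3:CreateBucket"],
      "Resource": "*"
    },
    {
      "Sid": "ModifyOnlyOwnTagged",
      "Effect": "Allow",
      "Action": ["ec2:TerminateInstances", "ec2:ModifyInstanceAttribute"],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/owner": "${aws:username}" }
      }
    }
  ]
}
```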
I would like to ask about Lambda and CloudWatch Logs. I see that once you create a Lambda function, the log group behind it is always named "/aws/lambda/&lt;function-name&gt;".
What I would like to do is create my own CloudWatch log group, "/myLogs", and configure the Lambda function to log into that group. From the documentation I've read, it looks like this was not possible in the past. Is there any update on that? Is it possible now, and if so, how?
Thanks.
By default, the log group is created with the name /aws/lambda/&lt;function-name&gt;.
I haven't explored this through the console, but if you are using the CDK, you can use the LogRetention construct from the aws_lambda module.
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_lambda/LogRetention.html
This will create a log group with the name you provide (though it will still follow the /aws/lambda/&lt;function-name&gt; convention).
You can set the retention days as well and manage the logs that way. It's worth a try.
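Outside the CDK, you can get the same effect by pre-creating the group yourself with the conventional name and a retention policy, e.g. in a CloudFormation template; the function logical ID here is illustrative:

```yaml
MyFunctionLogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    LogGroupName: !Sub /aws/lambda/${MyFunction}
    RetentionInDays: 14
```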
I'm creating a bunch of application resources with AWS CloudFormation, and when the resources are created, CloudFormation adds a hash at the end of the name to make it unique.
For example, if you wanted to create a Kinesis stream named MyStream, the actual name would be something like my-stack-MyStream-1F8ISNCLP0W4O.
I want to be able to programmatically access the resources without having to know the hash, without having to query AWS for my resources to match the names myself, and without manual steps. Does anybody know a convenient way to use AWS resources in your application programmatically and predictably?
Here are the less ideal options I can think of:
Set a tag on the resource (i.e. name -> MyStream) and query AWS to get the actual resource name.
Query AWS for a list of resources names and look for a partial match on the expected name.
After you create your resources, manually copy the actual names into your config file (probably the sanest of these options).
You can use the CloudFormation API to get a list of resources in your stack. This will give you a list of logical ids (i.e. the name in your CloudFormation template without the hash) and matching physical ids (with the stack name and hash). Using the AWS CLI, this will show a mapping between the two ids:
aws cloudformation describe-stack-resources \
  --stack-name <my-stack> \
  --query 'StackResources[].[LogicalResourceId,PhysicalResourceId]'
The same query is available through the CloudFormation APIs in each of the various language SDKs provided by Amazon.
You can use this as an alternative to option 1, by querying CloudFormation at runtime, or to option 3, by querying CloudFormation at build time and embedding the results in a config file. I don't see any advantage to maintaining your own tags over simply querying the CloudFormation API. Option 2 will cause problems if you want two or more stacks from the same template to coexist.
I've used both the runtime and build-time approaches. The build-time approach lets you remove the dependency on (and knowledge of) CloudFormation, but requires stack-specific information in your config file. I like the runtime approach because it allows the same build to be deployed to multiple stacks, and all it needs is the stack name to find all the related resources.
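As a sketch of the runtime approach, the lookup step itself is trivial once you have the DescribeStackResources response; the dict below is a hand-written stand-in for what boto3's `cloudformation.describe_stack_resources(StackName=...)` would return:

```python
def resource_map(describe_response):
    """Build a logical-id -> physical-id lookup from a
    DescribeStackResources response."""
    return {
        r["LogicalResourceId"]: r["PhysicalResourceId"]
        for r in describe_response["StackResources"]
    }

# Stand-in for the real API response:
response = {
    "StackResources": [
        {
            "LogicalResourceId": "MyStream",
            "PhysicalResourceId": "my-stack-MyStream-1F8ISNCLP0W4O",
        }
    ]
}

print(resource_map(response)["MyStream"])  # my-stack-MyStream-1F8ISNCLP0W4O
```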
I'm creating a stack with CloudFormation. When I create log groups, CloudFormation automatically adds a prefix and a suffix to my log group names. For example, if I try to create the log group MyLogGroup, it creates the log group my-stack-name-MyLogGroup-EEWJYSCJRK2V.
I understand that for a lot of use cases, this might be desired to differentiate the same resources across different stacks. However, my team has different accounts for our different stacks, so there will be no overlap. Having dynamic prefixes and suffixes makes it hard to reference log groups from static files (e.g. the CloudWatch Logs agent config file).
Is there a way to make sure that resources get named EXACTLY what I put and not add a prefix or suffix?
We have run into this same issue with our AWS ecosystem and after speaking to several folks at AWS, this is by design and is not modifiable right now.
Depending on the complexity of what you are trying to do, I would recommend replacing CloudFormation with some Lambda functions to manage the resources (can be done cross account with sts:AssumeRole).
Yes, it's possible. For instance, in our CloudFormation template we generate the CloudWatch agent conf file with the log_group_name and log_stream_name parameters set to a combination of different values, and our log groups are created without any prefix or suffix. The relevant lines of the generated conf file look like this:
log_group_name = MyLogGroup
log_stream_name = {instance_id}/MyLogGroup.log
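More generally, many resource types let you opt out of the generated name by setting the resource's explicit name property; for a log group that is LogGroupName, e.g. in a template (the name here is illustrative):

```yaml
MyLogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    LogGroupName: MyLogGroup
    RetentionInDays: 30
```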