We have a very complex api-gateway api that has been constructed manually through the console. I would like to create a cloudformation template from this existing api, so it can be managed instead in code.
The "create stack from existing resources" seems to require all of the resources to be pre-defined in a template. However this is exactly the bit I'm trying to avoid. Due to the complexity of the existing api, it would take a very long time to manually work through all of the api resources to create all the definitions in a template.
Is there some way I can have CloudFormation automatically scan through the existing api resources and create the template definitions from it?
There is a popular open-source tool called former2 which can generate CFN templates from existing resources.
Other than former2 there is nothing (CloudFormer is no longer maintained or supported by AWS). You would have to manually pre-populate the entire template before importing resources into CloudFormation.
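As a partial stopgap (my suggestion, not a full template generator): you can export the API's OpenAPI definition with the real `get_export` API and paste it into the `Body` of an `AWS::ApiGateway::RestApi` resource by hand. A minimal Python sketch; the client is injectable so the function can be exercised without AWS credentials:

```python
def export_openapi(rest_api_id, stage_name, client=None):
    """Export a deployed API Gateway stage as an OpenAPI 3.0 JSON document.

    The returned bytes can be embedded in the Body of an
    AWS::ApiGateway::RestApi resource as a starting point for a template.
    """
    if client is None:
        import boto3  # deferred so the function can be stubbed in tests
        client = boto3.client("apigateway")
    resp = client.get_export(
        restApiId=rest_api_id,
        stageName=stage_name,
        exportType="oas30",
        accepts="application/json",
    )
    return resp["body"].read()
```

This doesn't capture stages, deployments, or non-API resources, so it's only a head start on the template, not a replacement for former2.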
I'm working with a CloudFormation template which is defining a lot of parameters for static values out of the scope of the template.
For example, the template is creating some EC2, and it has parameters for each VPC subnet. If this was Terraform, I would just remove all of these parameters and use data to fetch the information.
Is it possible to do that with CloudFormation?
Notice that I'm not talking about referencing another resource created within the same template, but about a resource that already exists in the account that could have been created by different means (manual, Terraform, CloudFormation, whatever...)
No, CloudFormation does not have any native ability to look up existing resources. You can, however, achieve this using a CloudFormation macro.
A CloudFormation macro leverages a lambda function, which you can implement with whatever logic you need (e.g. using boto3) so that it returns the value you're after. You can even pass parameters to it.
Once the macro has been created, you can then consume it in your existing template.
You can find a full example on how to implement a macro, and on how to consume it, here: https://stackoverflow.com/a/70475459/3390419
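To make the macro approach concrete, here is a minimal sketch of a snippet-level macro handler (the event/response shape is the documented macro contract; the subnet lookup and the `VpcId` parameter name are illustrative). The EC2 client is injectable so the logic can be exercised without AWS access:

```python
def lookup_subnet_ids(vpc_id, ec2=None):
    """Return the subnet IDs of a VPC that already exists in the account."""
    if ec2 is None:
        import boto3  # deferred so the function can be stubbed in tests
        ec2 = boto3.client("ec2")
    resp = ec2.describe_subnets(
        Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
    )
    return [s["SubnetId"] for s in resp["Subnets"]]

def handler(event, context, ec2=None):
    """CloudFormation macro entry point.

    When a template uses Fn::Transform with this macro, CloudFormation
    sends the snippet's parameters in event["params"] and replaces the
    snippet with the returned "fragment".
    """
    subnet_ids = lookup_subnet_ids(event["params"]["VpcId"], ec2=ec2)
    return {
        "requestId": event["requestId"],
        "status": "success",
        "fragment": subnet_ids,
    }
```

In the template, the macro would then stand in wherever the subnet list is needed, replacing the static parameters.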
Goal:
I need to create an AWS ManagedPolicy that contains ALLOW permissions for API actions on resources created in a pre-existing stack. No, I cannot modify the existing stack's template and simply add a policy to it. I need to create a new stack that deploys a policy enabling actions on the existing stack's resources.
Solution:
Create a CDK project to generate and deploy this policy stack. Within this CDK project I want to load the existing stack and iterate over its resources adding permissions to my new stack's policy.
Problem:
I don't see any way to load an existing stack in CDK. I was hunting around for a "Stack.fromArn(...)" but don't see anything even similar.
Question:
Is there some obscure way to do this? Or is it simply not supported?
I have not tried it, but it looks like if you can access/look up at least one construct from the existing stack, you can use the method Stack.of(construct) (https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_core.Stack.html#static-ofconstruct) to look up the first stack scope in which the construct is defined. I'm not sure, however, how you could iterate over the resources in the looked-up stack construct.
It might not be the best answer, but one option could be to export outputs for the resources in the existing stack that you want to include in the policy, and import those values in the new stack where you create the policy.
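A sketch of that export/import route from the CDK app's side: at synth time, read the existing stack's outputs with boto3 (describe_stacks is the real API; the stack and output names below are placeholders) and feed the returned ARNs into the new policy. The client is injectable for local testing:

```python
def stack_outputs(stack_name, cfn=None):
    """Return the CloudFormation outputs of an existing stack as a dict."""
    if cfn is None:
        import boto3  # deferred so the function can be stubbed in tests
        cfn = boto3.client("cloudformation")
    stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
    return {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}
```

The resulting dict (e.g. {"TableArn": "arn:aws:dynamodb:..."}) can then be passed as the resources of an iam.ManagedPolicy in the new stack.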
I am making an HTTP API in AWS. Most of my logic is handled in lambda functions. I use terraform to describe and deploy my infrastructure. So far this is going well.
I have a single project that has lambdas for CRUD operations
GET /journal
POST /journal
GET /journal/{id}
PUT /journal/{id}
DELETE /journal/{id}
The code and infrastructure are all currently inside a monorepo.
It's now time to add endpoints for nested resources,
for example...
/journals/{id}/sentence/{id}
and
/journals/{id}/sentence/{id}/correction
I really want the code and terraform for these sub-resources to be in a separate project because it's going to get very, very big.
I am really struggling to figure out how to make sub-resources for API gateway that exist in multiple separately deployed projects.
I want to avoid exporting the ARNs for the api-gateway resources and using them in other projects as I don't think it's a great idea to create dependencies between separate deployments.
I would really appreciate hearing any advice on how this could be managed as I am sure many people have faced this issue.
Is it possible to use Route 53 to route all API calls on a domain to lots of separate API Gateways? If this is possible, it would make life much easier. I am really struggling to find documentation or literature that explains anything beyond creating an API with single-resource endpoints.
Edit: I had one other idea: maybe all my lambda projects could be completely ignorant of the api-gateway and just have their ARNs exported as outputs. Then I could have a completely separate project which defines the whole api-gateway and creates lambda integrations that simply use the function ARNs exported by the other projects. This would prevent each lambda project from needing to reference the api_gateway_resource of the corresponding parent resource.
I feel like this may not be a good idea though.
I want to avoid exporting the ARNs for the api-gateway resources and
using them in other projects as I don't think it's a great idea to
create dependencies between separate deployments.
I am not sure about that statement. Whichever way you go, you will have a dependency between the project that creates the API GW and the lambda sub-project.
As far as I understand, the question is in which direction you should create this dependency:
export the apigw and reuse it in the lambda project
export the lambda and reuse it in the apigw project
I think it makes sense for the lambda to be considered an independent piece of infrastructure in your Terraform project, without creating any API GW-related resources in it. First, doing so would create a strong implicit dependency and constrain the usage of your lambda. Second, think of it this way: what if your project grows and you need to add more and more lambdas to your API? In that scenario you probably don't want to create new methods/resources each time; it's more convenient to use something like for_each in Terraform, write the code once, and have new integrations created automatically when you add a lambda.
Hence I would avoid the first choice and go for the second option, which is way cleaner from an architectural standpoint: everything that deals with API GW stays in the same project. Your edits point in the right direction. You could have one repo for the "Infrastructure" (call it whatever you want) and one for the "Lambda". You could output the lambda ARNs as a list (or a custom map with other key parameters) and use remote state from the apigw project to loop through your lambdas and create the needed resources in one single place with for_each.
For example, use the remote state from apigw project:
data "terraform_remote_state" "lambda" {
  backend = "s3"
  config = {
    bucket = "my-bucket"
    key    = "my/key/state/for/lambda"
    region = "region"
  }
}
And use the outputs like this:
resource "aws_api_gateway_integration" "lambda" {
  for_each = data.terraform_remote_state.lambda.outputs.lambdas
  # ...
  uri = each.value
}
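For completeness, the lambda project's side of this contract could look like the following sketch (the function names are illustrative; invoke_arn is the attribute API Gateway lambda integrations expect as the uri):

```hcl
# In the lambda project: export a map of invoke ARNs for the apigw project
output "lambdas" {
  value = {
    get_journal  = aws_lambda_function.get_journal.invoke_arn
    post_journal = aws_lambda_function.post_journal.invoke_arn
  }
}
```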
Is it possible to use route53 to route all API calls on a domain to lots of separate API gateways? If this is possible then this would make life much easier.
Sorry, I am not sure I understand that point, but I will still try to detail the topic a bit. The "link" API Gateway provides you to call your API is just a DNS name. When you create an API, behind the scenes AWS creates a CloudFront distribution for you in the us-east-1 region. You don't have access to it through the console because it's managed by AWS. When you map an API to a domain name, you actually map the domain to the CloudFront distribution of your API. When you add methods or resources to your API (this is what you do with your lambdas), you don't create new APIs each time.
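On the Route 53 idea specifically: rather than DNS tricks, API Gateway's custom domain names support base path mappings, so several independently deployed APIs can sit under one domain. A hedged Terraform sketch (aws_api_gateway_base_path_mapping is the real resource; the resource names are illustrative):

```hcl
# One custom domain, several separately deployed REST APIs under base paths
resource "aws_api_gateway_base_path_mapping" "journals" {
  api_id      = aws_api_gateway_rest_api.journals.id
  stage_name  = aws_api_gateway_stage.journals.stage_name
  domain_name = aws_api_gateway_domain_name.main.domain_name
  base_path   = "journals" # served at https://<domain>/journals/...
}
```

Each separately deployed project would declare its own mapping against the shared domain, which avoids the projects referencing each other's api_gateway_resource IDs.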
Is there a way to incorporate existing AWS resources that were created outside of CloudFormation into an existing CloudFormation stack? I'd like to do this without having to add a new resource in the CloudFormation stack and migrate the existing resource's data over to that new resource. I see that AWS now has drift detection for CloudFormation stacks. I'm wondering if that might be able to be leveraged to incorporate existing resources into a stack.
The ability to import/adopt resources into an existing CloudFormation stack is the #1 ask from CloudFormation customers. We've been thinking about ways to do it for a while, but haven't hit upon a mechanism that both fits customer needs and works at the scale at which the service operates.
Since we don't expose stack state info anywhere outside the service for you to modify, the only approach you can take until we offer an adoption feature is to either store metadata about the resources in a parameter store, or use a custom resource as a wrapper to retrieve the information about the underlying resource and then surface it to your stack via Fn::GetAtt.
Now you finally can do it with the Resource Import feature. References:
https://github.com/aws/aws-sdk-js/blob/master/CHANGELOG.md
https://twitter.com/shortjared/status/1193985448164691970?s=21
You can do this by passing existing resource information to your stack via Parameters. Here is an example of how to pass these parameters to the stack.
Check out this blog post from Eric Hammond describing how you can incorporate these parameters into the rest of the stack. The use-case described is a bit different in that they are optionally creating new resources if they aren't passed in, but the overall structure applies to the case you've described.
In this case I don't think Drift Detection will help you, since it will show differences between deployed resources and the configuration described in a stack. Resources defined/created outside of the stack won't be checked.
Amazon's CDK (currently in developer preview as of writing) offers a way to do that:
If you need to reference a resource, such as an Amazon S3 bucket or VPC, that's defined outside of your CDK app, you can use the Xxxx.import(...) static methods that are available on AWS constructs. For example, you can use the Bucket.import() method to obtain a BucketRef object, which can be used in most places where a bucket is required. This pattern enables treating resources defined outside of your app as if they are part of your app.
Source: https://docs.aws.amazon.com/CDK/latest/userguide/aws_construct_lib.html
It also allows you to import existing CloudFormation templates:
https://docs.aws.amazon.com/CDK/latest/userguide/use_cfn_template.html
Importing existing resources into stacks is now supported by CloudFormation:
Announcement from AWS: AWS CloudFormation Launches Resource Import
Instructions via an example: HERE
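A sketch of the Resource Import flow via boto3 (create_change_set with ChangeSetType="IMPORT" is the real API; the stack name, template, and identifiers below are placeholders). The client is injectable so the call shape can be tested offline:

```python
def import_existing_resources(stack_name, template_body, resources_to_import, cfn=None):
    """Create an IMPORT-type change set that adopts existing resources into a stack.

    resources_to_import example:
        [{"ResourceType": "AWS::S3::Bucket",
          "LogicalResourceId": "MyBucket",
          "ResourceIdentifier": {"BucketName": "my-existing-bucket"}}]

    The template_body must already contain a definition for each imported
    resource, with DeletionPolicy set.
    """
    if cfn is None:
        import boto3  # deferred so the function can be stubbed in tests
        cfn = boto3.client("cloudformation")
    return cfn.create_change_set(
        StackName=stack_name,
        ChangeSetName="resource-import",
        ChangeSetType="IMPORT",
        TemplateBody=template_body,
        ResourcesToImport=resources_to_import,
    )
```

After reviewing the change set, completing the import is done with execute_change_set.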
Cloudformer might help you create a new stack from existing resources, and then you can add more resources to that stack. But I don't know of a way to "merge" an existing stack with existing resources outside the stack.
In my case I needed to import an ARN value from an existing SAM output in my account, so that I could add the proper invoke policy to my new stack.
I was looking for an equivalent of SAM's Fn::ImportValue, and found that the core module has a static Fn.importValue method, which you can use as follows:
const cdk = require('@aws-cdk/core');
const lambda = require('@aws-cdk/aws-lambda');

class MyStack extends cdk.Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    // The below line did the trick
    const arn = cdk.Fn.importValue('your-sam-function-export-name');
    const myLambda = lambda.Function.fromFunctionArn(this, 'myLambda', arn);
    // ...
  }
}
Reference: https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_core.Fn.html
I'm creating a bunch of application resources with AWS CloudFormation, and when the resources are created, CloudFormation adds a hash at the end of the name to make it unique.
i.e. if you wanted to create a Kinesis stream named MyStream, the actual name would be something like my-stack-MyStream-1F8ISNCLP0W4O.
I want to be able to programmatically access the resources without having to know the hash, without having to query AWS for my resources to match the names myself, and without manual steps. Does anybody know a convenient way to use AWS resources in your application programmatically and predictably?
Here are the less ideal options I can think of:
Set a tag on the resource (i.e. name -> MyStream) and query AWS to get the actual resource name.
Query AWS for a list of resource names and look for a partial match on the expected name.
After you create your resources, manually copy the actual names into your config file (probably the sanest of these options)
You can use the CloudFormation API to get a list of resources in your stack. This will give you a list of logical ids (i.e. the name in your CloudFormation template without the hash) and matching physical ids (with the stack name and hash). Using the AWS CLI, this will show a mapping between the two ids:
aws cloudformation describe-stack-resources \
  --query "StackResources[].[LogicalResourceId,PhysicalResourceId]" \
  --stack-name <my-stack>
CloudFormation APIs to do the same query are provided in all the various language SDKs provided by Amazon.
You can use this as an alternative to #1, by querying CloudFormation at runtime, or #3, by querying CloudFormation at buildtime and embedding the results in a config file. I don't see any advantage to using your own tags over simply querying the CF API. #2 will cause problems if you want two or more stacks from the same template to coexist.
I've used both the runtime and build time approaches. The build time approach lets you remove the dependency on or knowledge of CloudFormation, but needs stack specific information in your config file. I like the runtime approach to allow the same build to be deployed to multiple stacks and all it needs is the stack name to find all the related resources.
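The runtime approach described above can be sketched in Python (describe_stack_resources is the real API; the injectable client parameter is just for offline testing):

```python
def resource_ids(stack_name, cfn=None):
    """Map each logical id in a stack to its physical id (the name with the hash)."""
    if cfn is None:
        import boto3  # deferred so the function can be stubbed in tests
        cfn = boto3.client("cloudformation")
    resources = cfn.describe_stack_resources(StackName=stack_name)["StackResources"]
    return {r["LogicalResourceId"]: r["PhysicalResourceId"] for r in resources}
```

With the question's example, resource_ids("my-stack")["MyStream"] would return the generated name such as my-stack-MyStream-1F8ISNCLP0W4O.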