Importing existing resources with multiple accounts - amazon-web-services

We have four AWS accounts used to define different environments: dev, sqe, stg, prd. We're only now adopting CloudFormation, and I'd like to import an existing resource into a stack. As we roll this out, each environment will get the new stack, and I'm wondering whether there's an easier way to import the resource in each environment than initially going through the console to import the resource while adding the stack (it would be nice if we could just deploy via our deployment system).
What I was hoping for was something I could specify in the stack definition itself (e.g., "here's a bucket that already exists, take ownership"), but I'm not finding anything. Currently it seems like the easiest route would be to create an empty stack in each environment which imports the resource and then just deploy as normal.
Also, what happens when/if an update fails and a stack gets stuck in ROLLBACK_COMPLETE? Do I have to go through this again after deleting the stack?

What you have described sounds exactly like you're after a Continuous Integration / Continuous Deployment (CI/CD) pipeline. Instead of trying to import existing resources into your accounts, you're better off designing the CloudFormation templates and then deploying them to each environment through CodePipeline. This will also provide a clean separation between the accounts, instead of importing stg resources into prd.
A fantastic example and quickstart is the serverless-cicd-for-enterprise which should serve as a good starting point for you.
You can't get stuck in ROLLBACK_COMPLETE, as that is the last action a failed change set executes. It means the stack tried to update, couldn't, and has reverted to the last successful deployment. If this is the first deployment (no successful deployments), you will need to delete the stack and try again. However, if you have had a successful deployment, you can simply run another stack update.
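That said, if you do end up scripting the initial import per environment instead of clicking through the console, the import can be driven through the API with an IMPORT change set. A minimal boto3 sketch, with hypothetical stack, template and bucket names; it assumes the template already declares the bucket and that every imported resource has a DeletionPolicy set:
import boto3

cfn = boto3.client("cloudformation")

# Create an IMPORT change set that adopts the existing bucket into the stack.
cfn.create_change_set(
    StackName="my-stack",
    ChangeSetName="import-existing-bucket",
    ChangeSetType="IMPORT",
    TemplateBody=open("template.yaml").read(),
    ResourcesToImport=[{
        "ResourceType": "AWS::S3::Bucket",
        "LogicalResourceId": "ExistingBucket",  # logical id in the template
        "ResourceIdentifier": {"BucketName": "my-existing-bucket-dev"},
    }],
)

# Wait for the change set to be created, then execute it.
cfn.get_waiter("change_set_create_complete").wait(
    StackName="my-stack", ChangeSetName="import-existing-bucket"
)
cfn.execute_change_set(StackName="my-stack", ChangeSetName="import-existing-bucket")
Run that once per account/environment with the appropriate credentials, after which normal deployments can take over.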

Related

AWS CDK multiple Apps

Would it be possible to have two CDK Apps in the same project, something like this:
from aws_cdk import core
from stack1 import Stack1
from stack2 import Stack2
app1 = core.App()
Stack1(app1, "CDK1")
app1.synth()
app2 = core.App()
Stack2(app2, "CDK2")
app2.synth()
And deploy them? Synchronously/Asynchronously?
Would it be possible to reference some resources from one app in the other one?
Yes you can have multiple applications in a CDK project, but there are some serious caveats.
A CDK process can only synth/deploy one app at a time.
They cannot be defined in the same file.
They cannot directly reference each other's resources.
To put this in perspective, each app is functionally isolated from the other; it is roughly equivalent to having two separate CDK projects that just share the same codebase, so the use cases for this are limited.
The only way for them to share resources is either to extract them into an additional common app that must be deployed first, or to store the ARN of the resource somewhere (e.g., Parameter Store) and load it at run time. You cannot assume that the resource will exist, as one of the apps may not have been deployed yet, and if you import the resource into your stack directly, you've defeated the whole point of splitting them apart.
That is to say, this is ok:
stack1.lambda:
import boto3
from ssm_parameter_store import SSMParameterStore

store = SSMParameterStore(prefix='/Prod')

def handler(event, context):
    try:
        sns_arn = store['stack2.sns']  # may be missing if stack2 isn't deployed yet
    except Exception:
        # Doesn't matter
        return
    try:
        boto3.client('sns').publish(TopicArn=sns_arn, Message='something')
    except Exception:
        # Doesn't matter
        pass
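And on the producing side, a rough sketch (CDK for Python; the construct names and the parameter path are assumptions chosen to match the '/Prod' prefix above) of how stack2 could publish that ARN to Parameter Store:
from aws_cdk import core
from aws_cdk import aws_sns as sns
from aws_cdk import aws_ssm as ssm

class Stack2(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        topic = sns.Topic(self, "Topic")
        # Write the ARN under /Prod/stack2.sns so other apps can look it up at run time.
        ssm.StringParameter(self, "TopicArnParam",
                            parameter_name="/Prod/stack2.sns",
                            string_value=topic.topic_arn)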
But if it's critical to stack1 that a resource from stack2 exists, or you want to import a stack2 resource into stack1, then you either need to do a third split of all the common resources: common-resources.app.py, or there's no point splitting them.
We do this a lot in our projects, with one app creating a CodePipeline that automatically deploys the other app. However, we only do this because we prefer the pipeline to live next to the code it deploys; it would be equally valid to extract it into an entirely new project.
If you want to do this, you need to do:
app1.py:
from aws_cdk import core
from stack1 import Stack1
app1 = core.App()
Stack1(app1, "CDK1")
app1.synth()
app2.py:
from aws_cdk import core
from stack2 import Stack2
app2 = core.App()
Stack2(app2, "CDK2")
app2.synth()
You then deploy this by running in parallel or sequentially:
cdk deploy --app "python app1.py"
cdk deploy --app "python app2.py"
Having re-read your question, the short answer is no. In testing this, I found that CDK would only create the second app defined.
You can, however, deploy multiple-stack applications:
https://docs.aws.amazon.com/cdk/latest/guide/stack_how_to_create_multiple_stacks.html
It's also possible to reference resources from one stack in another, by using core.CfnOutput and core.Fn.importValue:
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.core/CfnOutput.html
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.core/Fn.html
Under the hood, this uses CloudFormation's ability to export outputs from one stack and import them into another. Effectively, your multiple-stack CDK app will create several CloudFormation stacks linked by cross-stack references.
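For illustration, a minimal Python sketch of that pattern (the stack classes, bucket and export name are hypothetical):
from aws_cdk import core
from aws_cdk import aws_s3 as s3

class ProducerStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        bucket = s3.Bucket(self, "SharedBucket")
        # Export the bucket name under a well-known export name.
        core.CfnOutput(self, "SharedBucketName",
                       value=bucket.bucket_name,
                       export_name="shared-bucket-name")

class ConsumerStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Resolved by CloudFormation at deploy time via Fn::ImportValue.
        bucket_name = core.Fn.import_value("shared-bucket-name")
        # ...use bucket_name in other constructs as needed.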
In terms of deployments, CDK creates a CloudFormation change set and deploys it, so all changes will be deployed on cdk deploy. From your perspective, it'll be synchronous, but there may be some asynchronous API calls happening under the hood through CloudFormation.
Based on your comments - mainly that you want to do parallel deployments from your local machine - you don't need multiple apps.
Just open a new terminal shell and start deploying again, using stack names to define which stack you are currently deploying:
**Shell 1**
cdk deploy StackName1
> Open a new terminal window
**Shell 2**
cdk deploy OtherStackName
and they will both run simultaneously. They have no interaction with each other, and if they depend on each other's resources being deployed in a certain order, this is simply a recipe for disaster.
But if all you are looking for is speed of deployment, then yeah, this will do the trick just fine.
If this is a common action, however, you'd be best advised to set up a CodePipeline with one stage containing two CloudFormation deploy actions that deploy your stacks from the synthesized templates (or two CodeBuild projects doing the same thing with cdk deploy).
Yes, you can do pretty much the exact thing you gave as an example in your question: have two apps and synthesize them into two separate folders. You do that by overriding the outdir prop for each app, otherwise they would overwrite each other's compiled files. See the more complete example at the end.
A few caveats though!
As of the time of this writing, this is most likely unsupported. In the docs of the outdir property it says:
You should never need to set this value.
This property is intended for internal and testing use.
So take it or leave it at your own risk :)
Calling cdk synth on this project will indeed create the two folders with the right files, but the command fails with ENOENT: no such file or directory, open 'cdk.out/manifest.json'. The mentioned cdk.out folder is created too; it's just empty. So I guess the CDK team doesn't account for anyone using this approach. I don't know the CDK internals well enough to be 100% sure, but from a brief glance at the compiled templates the output looks OK and should probably work.
You are limited in what you can share between the apps. Note that when you instantiate a stack, the first argument is the app, so for the second app you need a new stack instantiation.
You can deploy each app separately with the --app flag, e.g. cdk deploy --app cdk.out.dev
Full example here:
#!/usr/bin/env node
import "source-map-support/register";
import * as cdk from "aws-cdk-lib";
import { EventInfrastructureStack } from "../lib/stacks/event-infrastructure-stack";

const devApp = new cdk.App({
  outdir: "cdk.out.dev",
});
new EventInfrastructureStack(devApp, "EventInfrastructureStack", {
  env: {
    account: "account1",
    region: "eu-west-1",
  },
});

const prodApp = new cdk.App({
  outdir: "cdk.out.prod",
});
new EventInfrastructureStack(prodApp, "EventInfrastructureStack", {
  env: {
    account: "account2",
    region: "eu-west-1",
  },
});

devApp.synth();
prodApp.synth();
Now, you didn't tell us what you were trying to achieve. My goal when first looking into this was to have a separate app for each environment. CDK offers the Stage construct for this purpose, docs here.
An abstract application modeling unit consisting of Stacks that should
be deployed together.
You can then instantiate Stage multiple times to model multiple
copies of your application which should be deployed to different
environments.
Maybe that's what you were really looking for?
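If so, a rough sketch of that in Python (reusing the Stack1 class and the placeholder accounts from earlier; untested, so treat it as an outline rather than a working setup):
from aws_cdk import core
from stack1 import Stack1

class AppStage(core.Stage):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        Stack1(self, "CDK1")

app = core.App()
# One copy of the application per environment, each targeting its own account.
AppStage(app, "Dev", env=core.Environment(account="account1", region="eu-west-1"))
AppStage(app, "Prod", env=core.Environment(account="account2", region="eu-west-1"))
app.synth()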

Terraform destroy add the same resources

I'm facing a difficult issue to resolve.
I'm terraforming the deployment of multiple resources on GCP's platform.
Those resources are all included in the Terraform GCP network module (https://github.com/terraform-google-modules/terraform-google-network).
I'm building two projects with a (shared) VPC and some subnetworks. Easy at first glance.
The first terraform init/plan & apply was OK; the tfstate file is on a GCS backend with versioning set to true.
Today, I ran a terraform plan on it to check that everything was OK before making some modifications.
The output of the plan tells me that Terraform wants to destroy some resources ... and recreate (add) ... strictly the same resources ...
The code is in our Bitbucket repo, with no changes to it since the last apply, which was OK.
I tried to retrieve an old version of the tfstate files and to disable the GCS backend to debug and correct it locally, but I can't find a way to refresh the current state.
I tried these tricks:
terraform refresh
terraform import (40 resources by hand ... and even though the import commands work, the plan command still wants to destroy my existing resources and recreate strictly the same ones ...)
So I'm wondering if you have already encountered the same problem.
If yes, how did you manage it?
I can share my source on demand.
Terraform v0.12.9
provider.google v2.19.0
provider.google-beta v3.3.0
provider.null v2.1.2
provider.random v2.2.1
OK, big rookie mistake: terraform providers saved my day. No version was pinned on the module source ... I just defined it, re-ran the plan, and everything was fine again.

Import current state of my cloud AWS account with terraform

I would like to put my existing cloud resources under version control before using Terraform to apply changes. Is there any way I can run a single command and store the current state of my cloud?
I have tried to use Terraform's import command:
terraform import ADDR ID
But this takes a long time to identify all the resources and import them.
I have tried terraforming, but this also needs a resource type to import:
terraforming s3
Is there any tool that can help in importing all existing resources?
While this doesn't technically answer your question, I would strongly advise against trying to import an entire existing AWS account into Terraform in one go, even if it were possible.
If you look at any Terraform best practices, an awful lot of it comes down to minimising the blast radius of changes, so that only things which make sense to change at the same time are ever applied together. Charity Majors wrote a good blog post about this and the impact it had when that wasn't the case.
Any tool that would mass-import things (e.g. terraforming) is just going to dump everything in a single state file, which, as mentioned before, is a bad idea.
While it sounds laborious, I'd recommend that you begin your migration to Terraform more carefully and methodically. In general I'd probably say that only new infrastructure should use Terraform, utilising Terraform's data sources to look up things that already exist, such as VPC IDs.
Once you feel comfortable with using Terraform and structuring your infrastructure code and state files in a particular way, you can then begin to think about how you would map your existing infrastructure into Terraform state files and begin manually importing specific resources as necessary.
Doing things this way also allows you to find your feet with Terraform a bit better and understand its limitations and strengths while also working out how your team and/or CI will work together (eg remote state and state file locking and orchestration) without tripping over each other or causing potentially crippling state issues.
I'm using terraformer to import my existing AWS infrastructure. It's much more flexible than terraforming and doesn't have the issues mentioned in the other answers.

Limitations of Immutable Deployments on AWS/EB

I am trying to understand the disadvantages of immutable deployments on AWS/Elastic Beanstalk. The docs say this:
You can't perform an immutable update in concert with resource configuration changes. For example, you can't change settings that require instance replacement while also updating other settings, or perform an immutable deployment with configuration files that change configuration settings or additional resources in your source code. If you attempt to change resource settings (for example, load balancer settings) and concurrently perform an immutable update, Elastic Beanstalk returns an error.
(Source: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentmgmt-updates-immutable.html)
However, I am unable to come up with a practical scenario that would fail. I use CloudFormation templates for all of the configuration. Can the above be interpreted to mean that I cannot deploy CloudFormation changes and changes to the application (.jar) at the same time?
I would be very thankful for clarification.
Take this with a grain of salt because it's just a guess based on reading the docs; I think basic support is $40/month, so it would be a good question to ask them to know for sure.
Can the above be interpreted that I cannot deploy CloudFormation changes as well as changes to the application (.jar) at the same time
I'm assuming you deploy your application .jar using a different process than your CloudFormation template. Meaning that when you deploy source code you don't use CloudFormation; you maybe use a CI/CD tool, e.g. Codeship. And when you make a change to your CloudFormation template, you log in to the AWS Console and update the template there (or use the AWS CLI tool).
Changing both at the same time would, I think, fall under what they're saying here. Don't do it, for obvious reasons: you wouldn't want CloudFormation trying to make changes to an EC2 instance at the same time that EB is shutting down that instance and starting a new one. A more common example, though, would be if you happen to use .ebextensions for some configuration settings.
.ebextensions are a way to configure some things in EB that CloudFormation can't really do, or can't do easily. They are config files that get deployed with your source code in a folder named .ebextensions at the root of your project. An example is changing some specific Linux settings: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
You wouldn't want to make a change to your application code and an .ebextension at the same time. This is just my guess from reading the docs; you could test this out pretty easily.

How to automate the Updating/Editing of Amazon Data Pipeline

I want to use the AWS Data Pipeline service and have created some pipelines using the manual JSON-based mechanism, which uses the AWS CLI to create, put and activate the pipeline.
My question is: how can I automate the editing or updating of the pipeline if something changes in the pipeline definition? Things I can imagine changing include the schedule time, the addition or removal of Activities or Preconditions, references to DataNodes, resource definitions, etc.
Once the pipeline is created, we cannot edit quite a few things as mentioned here in the official doc: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-manage-pipeline-modify-console.html#dp-edit-pipeline-limits
This makes me believe that if I want to automate the updating of the pipeline, I would have to delete and re-create/activate a new one. If so, the next question is how to create an automated process which identifies the previous version's ID, deletes it, and creates a new one. Essentially I'm trying to build a release-management flow where the configuration JSON file is released and deployed automatically.
Most commands like activate, delete, list-runs, put-pipeline-definition etc. take the pipeline-id, which is not known until a new pipeline is created. I am unable to find anything which remains constant across updates or recreation (the unique-id and name parameters of the create-pipeline command are consistent, but I can't use them for the above-mentioned tasks; I need the pipeline-id for that).
Of course I can try writing shell scripts which grep and search the output, but is there any better way? Is there some other info that I am missing?
Thanks a lot.
You cannot edit schedules completely or change references, so creating/deleting pipelines seems to be the best approach for your scenario.
You'll need the pipeline-id to delete a pipeline. Is it not possible to keep a record of that somewhere? You can have a file with the last used id stored locally or in S3 for instance.
Some other ways I can think of are:
If you have only one pipeline in the account, you can list-pipelines and use the only result.
If you have the pipeline name, you can list-pipelines and find the id (see the sketch below).
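A rough boto3 sketch of that last approach, wired into the delete/re-create flow described above (the pipeline name, unique id and definition are hypothetical):
import boto3

dp = boto3.client("datapipeline")
PIPELINE_NAME = "my-etl-pipeline"  # hypothetical; kept constant across releases

# The name is stable across recreations, so use it to find the current pipeline-id.
existing = [p for p in dp.list_pipelines()["pipelineIdList"]
            if p["name"] == PIPELINE_NAME]
for p in existing:
    dp.delete_pipeline(pipelineId=p["id"])

# Re-create under the same name; uniqueId guards against accidental duplicates.
new_id = dp.create_pipeline(name=PIPELINE_NAME,
                            uniqueId="my-etl-pipeline-release")["pipelineId"]

# pipeline_objects would come from your versioned JSON definition (omitted here):
# dp.put_pipeline_definition(pipelineId=new_id, pipelineObjects=pipeline_objects)
# dp.activate_pipeline(pipelineId=new_id)
Note that list-pipelines is paginated, so a real script would follow the marker field if you have many pipelines.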