How to achieve multiple GCS backends in Terraform - google-cloud-platform

Within our team we each have our own dev project, and then we have test and prod environments.
We are currently migrating from Deployment Manager and the gcloud CLI to Terraform. However, we haven't been able to figure out a way to create isolated backends with the GCS backend. We've noticed that the remote backend supports setting a dedicated workspace, but we haven't been able to set up something similar with GCS.
Is it possible to state that Terraform resource A will have a configurable backend that we can adjust per project, or is the equivalent possible with workspaces? Then we could use tfvars and -var parameters to switch between projects.
As it stands, every time we attempt to make the backend configurable through variables, terraform init fails with:
Error: Variables not allowed
How does one go about creating isolated backends for each project? Or, if that isn't possible, how can we guarantee that a backend shared between multiple projects will not collide and corrupt the state?

Your backend (meaning your backend bucket) must be known when you run your terraform init command.
If you don't want to use workspaces, you have to customize the backend value before running init. We use make to achieve this: depending on the environment, make creates a backend.tf file with the correct backend name and then runs the init command.
EDIT 1
We have this piece of shell script, which creates the backend configuration before triggering the Terraform command (it's our Makefile that does this):
cat > $TF_export_dir/backend.tf << EOF
terraform {
  backend "gcs" {
    # Bucket name is assembled from environment variables set by make
    bucket = "$TF_subsidiary-$TF_environment-$TF_deployed_application_code-gcs-tfstatebackend"
    prefix = "terraform/state"
  }
}
EOF
Of course, the bucket name pattern depends on our project. $TF_environment is the most important part: depending on which environment value is set, a different bucket is reached.
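As an alternative to generating the whole file, Terraform also supports partial backend configuration: keep only the static settings in backend.tf and supply the variable parts at init time with -backend-config. A minimal sketch, assuming a bucket naming scheme like the one above (the bucket name below is illustrative):

terraform {
  backend "gcs" {
    # bucket is intentionally omitted here and supplied at init time
    prefix = "terraform/state"
  }
}

Each project or environment then runs:

terraform init -backend-config="bucket=myteam-dev-myapp-gcs-tfstatebackend"

This keeps one isolated state bucket per project without templating the file at all.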

Related

How to deploy to different environments (dev, uat, prod) using a CDK pipeline?

When I commit to the develop branch, it must deploy the code to a specific environment (dev). Similarly, when I push to the uat branch, it must deploy to the uat environment. How do I achieve this functionality in an AWS CDK pipeline?
There are stages that can be deployed to multiple regions, but I need a way to define that a push to a given branch deploys to the corresponding environment.
The best approach depends on a few factors including whether your stack is environment agnostic or not (i.e. whether it needs to look up resources from within a given account.)
For simply switching between different accounts and regions, the CDK team has a decent writeup here which recommends a small wrapper script for each environment that injects the configuration by way of CDK_DEPLOY_ACCOUNT and CDK_DEPLOY_REGION environment variables.
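That writeup's pattern boils down to a one-file-per-environment wrapper along these lines (a sketch; the script name, account ID and region are illustrative):

#!/usr/bin/env bash
# cdk-deploy-to-test.sh -- inject the target account/region, then
# forward any remaining arguments to cdk deploy. The CDK app reads
# these via the CDK_DEPLOY_ACCOUNT / CDK_DEPLOY_REGION environment
# variables in the env property of its stacks.
export CDK_DEPLOY_ACCOUNT=123456789012
export CDK_DEPLOY_REGION=eu-west-1
npx cdk deploy "$@"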
If you want to provide other synth time context then you can do so via the context API, which allows you to provide configuration 'in six different ways':
Automatically from the current AWS account.
Through the --context option to the cdk command.
In the project's cdk.context.json file.
In the project's cdk.json file.
In the context key of your ~/.cdk.json file.
In your AWS CDK app using the construct.node.setContext method.
My team uses the inline context args to define an environment name, and from the environment name, it reads a JSON config file that defines many environment-dependent parameters.
cdk deploy --context env=Dev
We let the environment name determine the branch name and set it accordingly on the 'Branch' property of the 'GitHubSourceAction'. (C# code)
// Read the environment name from the CDK context (e.g. cdk deploy --context env=Dev)
string env = (string)this.Node.TryGetContext("env");

var pipeline = new CdkPipeline(this, "My-Pipeline", new CdkPipelineProps
{
    SourceAction = new GitHubSourceAction(new GitHubSourceActionProps
    {
        // Check out the branch matching the environment name
        Branch = env
        // (other required props such as Owner, Repo and OauthToken omitted)
    })
});
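The per-environment JSON config file mentioned above might look something like this (the shape and keys here are purely illustrative):

{
  "Dev":  { "account": "111111111111", "region": "us-east-1" },
  "Prod": { "account": "222222222222", "region": "us-east-1" }
}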

How to handle private configuration file when deploying?

I am deploying a Django application using the following steps:
Push updates to Git
Log into AWS
Pull updates from Git
The issue I am having is with my production.py settings file. I have it in my .gitignore so it does not get uploaded to GitHub, for security. This, of course, means it is not available when I pull updates onto my server.
What is a good approach for making this file available to my app on the server without having to upload it to GitHub, where it would be exposed?
It is definitely a good idea not to check secrets into your repository. However, there's nothing wrong with checking in configuration that is not secret if it's an intrinsic part of your application.
In large scale deployments, typically one sets configuration using a tool for that purpose like Puppet, so that all the pieces that need to be aware of a particular application's configuration can be generated from one source. Similarly, secrets are usually handled using a secret store like Vault and injected into the environment when the process starts.
If you're just running a single server, it's probably fine to adjust your configuration or application to read secrets from the environment (or possibly a separate file) and set those values on the server. You can then include the other configuration settings (secrets excluded) as a file in the repository. If you need more flexibility later, you can pick up other tools.
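For the single-server case, a minimal sketch of reading secrets from the environment in a Django settings file (all variable names here are illustrative):

# settings/production.py
import os

# Fail fast if a required secret is missing rather than starting with a default
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "app"),
        "USER": os.environ.get("DB_USER", "app"),
        "PASSWORD": os.environ["DB_PASSWORD"],
        "HOST": os.environ.get("DB_HOST", "localhost"),
    }
}

The non-secret settings can then live in the repository as usual; only the environment values need to exist on the server.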

Creating a DC/OS service with artifacts from HDFS

I'm trying to create DC/OS services that download artifacts (custom config files etc.) from HDFS. I was using a simple FTP server for this before, but I wanted to use HDFS. Using "hdfs://" in an artifact URI is allowed, but it doesn't work correctly.
The artifact fetch ends with an error because there's no "hadoop" command. Weird. I read that I need to provide my own Hadoop for it.
So I downloaded Hadoop and set up the necessary variables in /etc/profile. I can run "hadoop" without any problem when SSHing to the node, but the service still ends with the same error.
It seems that the environment variables configured in the service are applied only after the artifact fetch, because they don't help at all. Also, it looks like services completely ignore the /etc/profile file.
So my question is: how do I set everything up so my service can fetch artifacts stored on HDFS?
The Mesos fetcher supports local Hadoop clients; please check your agent configuration, and in particular your --hadoop_home setting.
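As a hedged sketch of what that can look like: Mesos agent flags can also be supplied as MESOS_* environment variables, and on DC/OS the agent picks up extra configuration from a file on each node (the exact path varies by DC/OS version, so treat it as an assumption):

# On each agent node: point the Mesos fetcher at a local Hadoop install
echo 'MESOS_HADOOP_HOME=/opt/hadoop' | sudo tee -a /var/lib/dcos/mesos-slave-common
sudo systemctl restart dcos-mesos-slave.service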

Limitations of Immutable Deployments on AWS/EB

I am trying to understand the disadvantages of immutable deployments on AWS/Elastic Beanstalk. The docs say this:
You can't perform an immutable update in concert with resource configuration changes. For example, you can't change settings that require instance replacement while also updating other settings, or perform an immutable deployment with configuration files that change configuration settings or additional resources in your source code. If you attempt to change resource settings (for example, load balancer settings) and concurrently perform an immutable update, Elastic Beanstalk returns an error.
(Source: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentmgmt-updates-immutable.html)
However, I am unable to come up with a practical scenario that would fail. I use CloudFormation templates for all of the configuration. Can the above be interpreted to mean that I cannot deploy CloudFormation changes as well as changes to the application (.jar) at the same time?
I would be very thankful for clarification.
Take this with a grain of salt because it's just a guess based on reading the docs; I think basic support is $40/month, and this would be a good question to ask them to know for sure.
Can the above be interpreted that I cannot deploy CloudFormation changes as well as changes to the application (.jar) at the same time
I'm assuming you deploy your application .jar using a different process than your CloudFormation template. Meaning when you deploy source code you don't use CloudFormation; you maybe use a CI/CD tool, e.g. Codeship. And when you make a change to your CloudFormation template, you log in to the AWS Console and update the template there (or use the AWS CLI tool).
Changing both at the same time would, I think, fall under what they're saying here. Don't do it, for obvious reasons; you wouldn't want CloudFormation trying to make changes to an EC2 instance at the same time that EB is shutting down that instance and starting a new one. But a more common example, I think, would be if you happen to use .ebextensions for some configuration settings.
.ebextensions are a way to configure some things in EB that CloudFormation can't do, or can't do easily. They are config files that get deployed with your source code in a folder named /.ebextensions at the root of your project. An example is changing some specific Linux settings: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
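For a concrete picture, an .ebextensions file is YAML and might look like this (the particular setting is just an illustration):

# .ebextensions/01-os-tuning.config
commands:
  01_increase_somaxconn:
    # Raise a kernel setting on each instance at deploy time
    command: sysctl -w net.core.somaxconn=1024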
You wouldn't want to make a change to your application code and an .ebextension at the same time. This is just my guess from reading the docs; you could test this out pretty easily.

AWS serverless projects cannot be shared via Git?

s-function.json needs the variable "customRole": "${myLambdaRole}".
But if somebody else gets my Serverless project via git clone, they don't get the _meta folder.
Now they call serverless project init with the same stage and region. That creates the _meta folder, but it does NOT populate s-variables-common.json with the output variables from s-resources-cf.json.
Now they try to deploy with serverless dash deploy, and that errors with:
Serverless: WARNING: This variable is not defined: myLambdaRole
Unfortunately, even calling serverless resources deploy will not fix the problem, because it says:
Serverless: Deploying resources to stage "dev" in region "us-east-1" via Cloudformation (~3 minutes)...
Serverless: No resource updates are to be performed.
and s-variables-common.json is still not populated with the necessary output variables.
What that basically means is that it is impossible to work together as a team on the same stage, in the same region, with the same resources when sharing the project via Git.
So, since we don't want to check the _meta folder into Git, I would suggest that a serverless project init call should make sure that all the output variables are properly fetched and populated in s-variables-common.json.
This is pretty important; how do you share projects via Git otherwise?
There is a plugin called "meta sync" that should solve your problem:
https://github.com/serverless/serverless-meta-sync
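Getting it in place looks roughly like this (the README linked above is authoritative; the exact command form is an assumption based on it):

npm install serverless-meta-sync --save
# register "serverless-meta-sync" in the "plugins" array of s-project.json,
# configure the sync bucket as the README describes, then run:
serverless meta sync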