deploy different resources using deployment manager? - google-cloud-platform

I'm planning to use Deployment Manager to deploy a new project for each of our clients.
I'm just wondering whether I can do the following with Deployment Manager, or put it into a script/YAML file, so that it deploys all the components at once from the command shell:
create a new GCP project
create a VPC for the client with a custom subnet assigned
create a VM and set its network to the custom VPC/subnet
create an App Engine app with different services using a YAML file
create storage buckets
create a Cloud SQL for PostgreSQL instance
What I have tried so far: I can deploy the VM through Deployment Manager, and I can create the other components individually from the command line, but not through Deployment Manager in one single step.
Thanks for your help.

Deployment Manager should work perfectly for this type of setup. There are a few minor caveats though.
You need to have a project in place from which you can run Deployment Manager.
You will need to grant the Deployment Manager service account all the required permissions before creating the deployment (such as Project Creator at the org level). The service account is [PROJECT_NUMBER]@cloudservices.gserviceaccount.com.
Next, you will want to call each of the resources individually in your Deployment Manager manifest; luckily, all of these resource APIs are supported by DM:
Projects to create the project.
** All of the following resources should make a reference to this resource to create a dependency, so that DM does not try to create them before the project exists... which would result in a failure (see the config sketch below)
VPC and VMs: use something like this
** This includes adding GKE clusters at the end and a VPC peering you won't need, but it demonstrates the creation of a VPC, subnets, firewall rules and a VM
App Engine
GCS Bucket
SQL instance
As long as your overall config is less than 1 MB, you can place all these resources into a single config.
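Roughly, such a config could start out like the sketch below; the type names come from DM's supported types, but the project ID, org ID, and resource names are placeholders, and a real deployment would also need resources to attach billing and enable the relevant APIs in the new project (omitted here). Note how the $(ref...) on the VPC creates the dependency on the project resource:

resources:
# Creates the new client project under your organization
- name: client-project
  type: cloudresourcemanager.v1.project
  properties:
    name: client-project
    projectId: client-project-12345        # must be globally unique
    parent:
      type: organization
      id: "123456789012"                   # placeholder org ID

# A custom-mode VPC created in the new project; referencing the project
# resource makes DM wait until the project exists before creating it
- name: client-vpc
  type: gcp-types/compute-v1:networks
  properties:
    name: client-vpc
    project: $(ref.client-project.projectId)
    autoCreateSubnetworks: false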
If you are new to DM, I recommend trying each of these resources individually to make sure that you have the syntax correct. Trying to debug syntax errors with multiple resources is much more difficult.
I also recommend using the --preview flag before creating or updating resources so that you can make sure that your configurations or changes will come into effect the way you planned.
Finally, you can either write all of this directly into a YAML config, or you can create templates using either Jinja or Python 2, which can be imported into your config.yaml.

Please take a look at the Deployment Manager Cloud Foundation Toolkit, which is a set of well-designed templates.

Related

Is it possible to mount an (S3) file into an AWS::ECS::TaskDefinition ContainerDefinition using CloudFormation?

I have an ECS cluster that is running task definitions with a single container inside each group. I'm trying to add some fancy observability to my application by introducing OpenTelemetry. Following the AWS docs, I found https://github.com/aws-observability/aws-otel-collector which is the AWS version of the OTEL collector. This collector needs a config file (https://github.com/aws-observability/aws-otel-collector/blob/main/config/ecs/ecs-default-config.yaml) that specifies stuff like receivers, exporters, etc. I need to be able to create my own config file with a 3rd-party exporter (I also need to add my secret API key somewhere inside it; maybe it can go to Secrets Manager and get mounted as an env var).
I'm wondering if this is doable purely with CloudFormation (which is what I use to deploy my app) and other Amazon services, without having to build my own image with the config baked in somewhere.
The plan is to add this container beside each app container (inside the task definition) [and yeah, I know this is overkill, but for now simple > perfect].
Building an additional image would require some significant changes to the CI/CD, so if I can avoid that it would be awesome.
You can't mount an S3 bucket in ECS. S3 isn't a file system, it is object storage. You would need to either switch to EFS, which can be mounted by ECS, or add something to the startup script of your docker image to download the file from S3.
I would recommend checking the docs for AWS ADOT. You will find that it supports the config variable AOT_CONFIG_CONTENT (doc). So you don't need a config file, only a config env variable. That plays very well with the AWS ecosystem, because you can use AWS Systems Manager Parameter Store and/or AWS Secrets Manager, where you can store the OTel collector configuration (doc).
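As a rough sketch (assuming the collector config has already been stored in an SSM parameter; the parameter name, role ARN, account IDs, and image tags below are placeholders, and the collector image path should be checked against the ADOT docs), the variable can be injected through the container's Secrets so nothing has to be baked into the image:

Resources:
  OtelTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: my-app
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: "256"
      Memory: "512"
      # The execution role must be allowed to call ssm:GetParameters on the parameter
      ExecutionRoleArn: arn:aws:iam::123456789012:role/ecsTaskExecutionRole
      ContainerDefinitions:
        - Name: aws-otel-collector
          Image: public.ecr.aws/aws-observability/aws-otel-collector:latest
          Essential: true
          Secrets:
            # ADOT reads its collector configuration from this env variable
            - Name: AOT_CONFIG_CONTENT
              ValueFrom: !Sub arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/otel/collector-config
        - Name: my-app                     # your existing application container
          Image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest
          Essential: true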

Deploy new container revision to Cloud Run without changing Terraform

I am setting up a CI/CD environment for a GCP project that involves Cloud Run. While setting everything up via Terraform is pretty much straightforward, I cannot figure out how to update the environment when the code changes.
The documentation says:
Make a change to the configuration file.
But that couples the application deployment to the Terraform configuration, which should be responsible only for infrastructure deployment.
Ideally, I would use Terraform to provision the infrastructure, and another CI step to build and deploy the container.
Is there a best-practice here?
Relevant sources: 1.
I ended up separating Cloud Run service creation (which is still done in Terraform) and deployment into two different workflows.
The key component was to make Terraform ignore the actually deployed image, so that when the code deployment workflow is done, Terraform won't complain that the Cloud Run image is different from the one it manages. I achieved this by setting ignore_changes = [template[0].spec[0].containers[0].image] on the google_cloud_run_service resource.
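In HCL that looks roughly like this (service name, region, and image are placeholders):

resource "google_cloud_run_service" "service" {
  name     = "my-service"
  location = "europe-west1"

  template {
    spec {
      containers {
        # Only the initial image; later revisions are deployed by the CI workflow
        image = "gcr.io/my-project/my-app:bootstrap"
      }
    }
  }

  lifecycle {
    # Don't treat images deployed outside Terraform as drift
    ignore_changes = [template[0].spec[0].containers[0].image]
  }
}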

How can I create realistic traffic on an AWS network via CLI

I'm working as an intern on a project where I have to create a CTF mission (like hackthebox and other CTF-related websites). Making use of Terraform, I automatically deploy instances which are fully configured in VPCs/subnets/AZs, etc.
Ultimately, I'm struggling to create realistic yet fake traffic in this dynamically deployed environment, in which I only have access to "local-exec" and "data-template", rendering GUI tools useless since I need it done via the CLI.
Does anyone have any suggestions as to tools I could use to simplify this task?

Extract Entire AWS Setup into storable Files or Deployment Package(s)

Is there some way to 'dehydrate' or extract an entire AWS setup? I have a small application that uses several AWS components, and I'd like to put the project on hiatus so I don't get charged every month.
I wrote / constructed the app directly through the various services' sites, such as VPN, RDS, etc. Is there some way I can extract my setup into files so I can save these files in Version Control, and 'rehydrate' them back into AWS when I want to re-setup my app?
I tried extracting pieces from Lambda and Event Bridge, but it seems like I can't just 'replay' these files using the CLI to re-create my application.
Specifically, I am looking to extract all code, settings, connections, etc. for:
Lambda. Code, env variables, layers, scheduling through Event Bridge
IAM. Users, roles, permissions
VPC. Subnets, Route tables, Internet gateways, Elastic IPs, NAT Gateways
Event Bridge. Cron settings, connections to Lambda functions.
RDS. MySQL instances. Would like to get all DDL. Data in tables is not required.
Thanks in advance!
You could use Former2. It will scan your account and allow you to generate CloudFormation, Terraform, or Troposphere templates. It uses a browser plugin, but there is also a CLI for it.
What you describe is called Infrastructure as Code. The idea is to define your infrastructure as code and then deploy your infrastructure using that "code".
There are a lot of options in this space. To name a few:
Terraform
Cloudformation
CDK
Pulumi
All of those should allow you to import already existing resources. Terraform, at least, has an import command to bring an already existing resource into your IaC project.
This way you could create a project that mirrors what you currently have in AWS.
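With Terraform, for example, the flow for an existing VPC would roughly be: write a matching resource block, then run terraform import (the resource address and VPC ID below are placeholders):

# Hypothetical stub for an existing VPC; attributes must match what is already deployed
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Then bring the real resource under Terraform's management:
#   terraform import aws_vpc.main vpc-0123456789abcdef0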
Excluded are things that are, strictly speaking, not AWS resources, like:
Code of your Lambdas
MySQL DDL
Depending on the Lambda deployment "strategy", the code is either on S3 or was deployed directly to the Lambda service. If it is the former, you just need to find the S3 bucket and download the code from there. If it is the latter, you might need to copy and paste it by hand.
When it comes to your MySQL DDL, you will need a tool to export it, but there are plenty of tools out there that do this (for example, mysqldump with the --no-data flag dumps just the schema).
After you have done that, you should be able to destroy all the AWS resources and then deploy them again later from your new IaC.

Terraform: Separate modules VS one big project

I'm working on a Datalake project composed of many services: 1 VPC (+ subnets, security groups, internet gateway, ...), S3 buckets, an EMR cluster, Redshift, Elasticsearch, some Lambda functions, API Gateway and RDS.
We can say that some resources are "static" as they will be created only once and will not change in the future, like the VPC + subnets and the S3 buckets.
The other resources will change during the development and production project lifecycle.
My question is: what's the best way to manage the structure of the project?
I first started this way:
modules/
  rds/
    main.tf
    variables.tf
    output.tf
  emr/
  redshift/
  s3/
  vpc/
  elasticsearch/
  lambda/
  apigateway/
main.tf
variables.tf
So this way I only have to do a single terraform apply and it deploys all the services.
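For reference, the root main.tf in this layout just wires the modules together, something along these lines (the module inputs and outputs here are made up for illustration):

module "vpc" {
  source = "./modules/vpc"
}

module "s3" {
  source = "./modules/s3"
}

module "rds" {
  source     = "./modules/rds"
  subnet_ids = module.vpc.private_subnet_ids   # assumes the vpc module exposes this output
}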
The second option (I've seen some developers using it) is that each service lives in a separate folder, and we just go to the folder of the service we want to deploy and run terraform apply there.
We will be 2 to 4 developers on this project, and some of us will only work on separate resources.
What strategy would you advise me to follow? Or maybe you have another idea or best practice?
Thanks for your help.
The way we do it is separate modules for each service, with a “foundational” module that sets up VPCs, subnets, security policies, CloudTrail, etc.
The modules for each service are as self-contained as possible. The module for our RDS cluster for example creates the cluster, the security group, all necessary IAM policies, the Secrets Manager entry, CloudWatch alarms for monitoring, etc.
We then have a deployment “module” at the top that includes the foundational module plus any other modules it needs. One deployment per AWS account, so we have a deployment for our dev account, for our prod account, etc.
The deployment module is where we setup any inter-module communication. For example if web servers need to talk to the RDS cluster, we will create a security group rule to connect the SG from the web server module to the SG from the RDS module (both modules pass back their security group ID as an output).
Think of the deployment as a shopping list of modules and stitching between them.
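For instance, the web-server-to-RDS stitching described above could look roughly like this in the deployment module (module and output names are placeholders):

# Stitching in the deployment module: let web servers reach the RDS cluster on 5432
resource "aws_security_group_rule" "web_to_rds" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = module.rds.security_group_id    # SG output from the RDS module
  source_security_group_id = module.web.security_group_id    # SG output from the web server module
}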
If you are working on a module and the change is self-contained, you can do a terraform apply -target=module.modulename to change your thing without disrupting others. When your account has lots of resources this is also handy so plans and applies can run faster.
P.S. I also HIGHLY recommend that you set up remote state for Terraform, stored in S3 with DynamoDB for locking. If you have multiple developers, you DO NOT want to try to manage the state file yourself; you WILL clobber each other's work. I usually have a state.tf file in the deployment module that sets up remote state.
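A typical state.tf for this is sketched below (bucket, key, and table names are placeholders; the S3 bucket and the DynamoDB lock table need to exist before terraform init):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"                # pre-created state bucket
    key            = "deployments/dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                   # pre-created lock table with a LockID hash key
    encrypt        = true
  }
}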