How to automatically add VPC and region parameters into the YAML file for deploying an ingress controller?

Context:
I have been successful in setting up an EKS cluster using eksctl with the help of AWS's documentation. The subsections that were most useful to me are:
Clusters
> Creating a cluster
Networking
> Pod networking
> Installing the AWS LB controller
Workloads
> Application load balancing
Now I am trying to use the same commands I learned from those subsections to write a shell script and automate the entire setup process, up until the sample app deployment.
I am stuck at the step where I have to look up the VPC ID and region code of the cluster created via eksctl and fill them into the v2_4_3_full.yaml file. (This file creates all the components necessary to provision an ingress component under the working namespace in Kubernetes.)
I feel totally blank when I think of ways to do that automatically instead of manually looking up the VPC and region IDs.
Below is the part of the YAML file where that has to be done. Not only do the values have to be filled in automatically, but the corresponding parameters have to be added in as well; those are the last two entries under args below. I have no clue how to achieve that.
spec:
  containers:
  - args:
    - --cluster-name=your-cluster-name
    - --ingress-class=alb
    - --aws-vpc-id=vpc-xxxxxxxx
    - --aws-region=region-code
I am not sure what ways exist, if any. I imagine it would be some shell command I don't know, or some Python package that does this. Nonetheless, any suggestion is greatly appreciated.
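One way to script this, shown below as a minimal sketch: pull the VPC ID from the EKS API and the region from the local AWS CLI config, then patch the manifest with sed. The cluster name and the exact indentation are assumptions (the appended lines must match the manifest's indentation), and the append syntax is GNU sed.

#!/usr/bin/env bash
set -euo pipefail

# Cluster name is a placeholder; use the name you passed to eksctl.
CLUSTER=my-cluster

# Region from the local AWS CLI config; the VPC ID from the cluster itself.
REGION=$(aws configure get region)
VPC_ID=$(aws eks describe-cluster --name "$CLUSTER" \
  --query "cluster.resourcesVpcConfig.vpcId" --output text)

# Fill in the cluster name, then append the two extra args right after
# the --ingress-class line (GNU sed; \n starts the second appended line).
sed -i "s/your-cluster-name/$CLUSTER/" v2_4_3_full.yaml
sed -i "/--ingress-class=alb/a\        - --aws-vpc-id=$VPC_ID\n        - --aws-region=$REGION" v2_4_3_full.yaml

A YAML-aware tool such as yq would be more robust than sed, but the idea is the same: look the values up with the AWS CLI and inject them before running kubectl apply.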

Related

How to update AWS Fargate service outside AWS code deploy in order to change desired task count

When setting up AWS CodeDeploy to deploy a service, we have to provide two target groups, say
TargetGroupBlue and TargetGroupGreen.
In the CloudFormation template we use TargetGroupBlue when linking the service to the load balancer.
TargetGroupGreen is created only to be used by AWS during the CodeDeploy deployment.
Step 1: We executed the create-stack command to create the service and load balancer. We have a working service now; traffic is routed via TargetGroupBlue.
Step 2: We then use CodeDeploy to do another deployment, which swaps the target group to TargetGroupGreen once done.
Step 3: Now we need to update the desired task count on the service, so we use the CloudFormation update-stack command. This fails because the active target group is TargetGroupGreen (CodeDeploy changed it in step 2) and our CloudFormation template linked the service to the load balancer using TargetGroupBlue.
A workaround could be to do all service-related updates outside CodeDeploy in an even-numbered release (so we always have to run CodeDeploy twice, ensuring traffic is always routed to TargetGroupBlue).
Is this the way we should work with service updates via CloudFormation and CodeDeploy?
Please help me get this figured out.
Even though AWS provides many cool ways to do blue/green deployments, doing them with CodeDeploy or CloudFormation really sucks.
The workaround they suggested was to use Custom Resources in CloudFormation, which trigger a Lambda function to update the services, effectively working around the CloudFormation stack update. Sample.
But there are no proper samples of this, so it would take a lot of time to get it working the way you need.
Furthermore, CloudFormation with hooks does not really work for bigger projects, as the load balancers cannot be shared.
So here is the open ticket; please give it a thumbs up so that AWS will prioritize it on their roadmap.
https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/483
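For the specific case in the question, updating the desired task count, a minimal sketch of doing it outside the stack entirely via the ECS API (cluster and service names are placeholders; note that if the count is also declared in the template, the stack will drift):

# Change the desired count directly on the service, bypassing CloudFormation.
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --desired-count 4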

deploy different resources using deployment manager?

I'm planning to use Deployment Manager to deploy a new project for each of our clients.
I'm just wondering: can I do the following using Deployment Manager, or put it into a script/YAML, so that it deploys all the components at once through the command shell?
create a new GCP project
create a VPC for the client with a custom subnet assigned
create a VM with its network set to the custom VPC/subnet
create an App Engine app with different services using the YAML file
create storage buckets
create a Cloud SQL Postgres instance
What I have tried so far: I can deploy the VM through Deployment Manager, and I can create the other resources individually using the command line, but not via Deployment Manager in one single step.
Thanks for your help.
Deployment Manager should work perfectly for this type of setup. There are a few minor caveats, though.
You need to have a project in place from which you can run Deployment Manager.
You will need to grant the Deployment Manager service account all the required permissions before creating the deployment (such as Project Creator at the org level). The service account is [PROJECT_NUMBER]@cloudservices.gserviceaccount.com.
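For the org-level grant, a sketch with gcloud (the org ID and project number are placeholders):

# Allow the Deployment Manager service account to create projects in the org.
gcloud organizations add-iam-policy-binding 123456789012 \
    --member="serviceAccount:987654321098@cloudservices.gserviceaccount.com" \
    --role="roles/resourcemanager.projectCreator"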
Next, you will want to call each of the resources individually in your Deployment Manager manifest. Luckily, all of these resource APIs are supported by DM:
Projects to create the project.
** All following resources should reference this resource to create a dependency, so that DM does not try to create them before the project exists, which would result in a failure.
VPC and VMs: use something like this
** This includes adding GKE clusters at the end and a VPC peering you won't need, but it demonstrates the creation of a VPC, subnets, firewall rules, and a VM.
App Engine
GCS bucket
SQL instance
As long as your overall config is less than 1 MB, you can place all these resources into a single config.
If you are new to DM, I recommend trying each of these resources individually to make sure you have the syntax correct. Trying to debug syntax errors across multiple resources is much more difficult.
I also recommend using the --preview flag before creating or updating resources, so that you can make sure your configurations or changes will take effect the way you planned.
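As an illustration of that preview-then-apply flow (the deployment name is a placeholder):

# Create the deployment in preview mode; nothing is provisioned yet.
gcloud deployment-manager deployments create client-project-a \
    --config config.yaml --preview

# After reviewing the previewed actions, apply them as-is.
gcloud deployment-manager deployments update client-project-a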
Finally, you can either write all this directly into a YAML config or create templates using either Jinja or Python 2, which can be imported into your config.yaml.
Please take a look at the Deployment Manager Cloud Foundation Toolkit, which is a set of well-designed templates.

GCP - how to add labels to VM deployment

I am trying to deploy a VM in GCP using Deployment Manager. I have created a YAML file that contains all the properties; the VM is provisioned OK, but without the labels.
I am using this property in the file:
tags:
  working-environment-id: vsaworkingenvironment-jwxekh7c
You are right: instead of tags you need to use labels. Take a look at this documentation for more details.
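In other words, the key in the properties block should be labels: rather than tags:. If the VM already exists, the label can also be applied after the fact with gcloud (the instance name and zone are placeholders):

# Attach the label to the existing instance without redeploying.
gcloud compute instances update my-vm --zone us-central1-a \
    --update-labels=working-environment-id=vsaworkingenvironment-jwxekh7c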

What commands/config can I use to deploy a django+postgres+nginx using ecs-cli?

I have a basic Django/Postgres app running locally, based on the Docker Django docs. It uses Docker Compose to run the containers locally.
I'd like to run this app on Amazon Web Services (AWS) and to deploy it using the command line, not the AWS console.
My Attempt
When I tried this, I ended up with:
this yml config for ecs-cli
these notes on how I deployed from the command line
Note: I was trying to fire up the Python dev server in a janky way, hoping that would work before I added nginx. The cluster (RDS + server) would come up, but then the instances would die right away.
Issues I Then Failed to Solve
I realized over the course of this:
the setup needs another container for a web server (nginx) to run on AWS (like this blog post, but that tutorial uses the AWS Console, which I wanted to avoid)
ecs-cli uses a different syntax for its yml/json config than docker-compose, so you need separate (if similar) config alongside your local docker.yml (and I'm not sure my file above was correct)
Question
So, what ecs-cli commands and config do I use to deploy the app, or am I going about things all wrong?
Feel free to say I'm doing it all wrong. I could also use Elastic Beanstalk; the tutorials on that don't seem to use docker/docker-compose, but it seems easier overall (at least it is well documented).
I'd like to understand why any given approach is a good way to do this.
One alternative you may wish to consider in lieu of ECS, if you just want to get the app up in the Amazon cloud, is to make use of docker-machine with the amazonec2 driver.
When executing docker-compose, just ensure the remote Amazon host machine is ACTIVE, which can be checked with docker-machine ls.
One item you will have to revisit in the AWS Management Console is opening the applicable ports, such as port 80 and any other ports exposed in the compose file. Once the security group is in place for the VPC, you should be able to simply refer to the VPC ID on subsequent runs, bypassing any need to use the console to add the ports. You may wish to bump up the instance size from the default t2.micro to match the t2.medium specified in your notes.
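A minimal sketch of that flow (the machine name is a placeholder; the region and instance type flags are assumptions, sized to match the t2.medium from your notes):

# Provision a Docker host on EC2 and point the local client at it.
docker-machine create --driver amazonec2 \
    --amazonec2-region us-east-1 \
    --amazonec2-instance-type t2.medium \
    aws-sandbox

docker-machine ls                    # the new machine should show as ACTIVE
eval $(docker-machine env aws-sandbox)

# Now compose runs against the remote host instead of the local daemon.
docker-compose up -d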
If ECS orchestration is needed, then a task definition will need to be created containing the container definitions you require, as defined in your docker-compose file. My recommendation would be to take advantage of the management console to construct the definition, then grab the accompanying JSON definition it makes available and store it in your source code repository for future use on the command line, where it can be referenced when registering task definitions and running tasks and services within a given cluster.
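For example, once that JSON is in your repository, registering and running it from the command line could look like this (file, cluster, and service names are placeholders):

# Register the task definition exported from the console...
aws ecs register-task-definition --cli-input-json file://taskdef.json

# ...and run it as a service in an existing cluster.
aws ecs create-service --cluster my-cluster --service-name django-app \
    --task-definition django-app --desired-count 1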

Deploying multiple Deis clusters

I am looking to create a number of Deis clusters running in parallel on AWS and haven't been able to find any good documentation on how to do so. From what I understand, I'd have to do the following:
When provisioning the cluster:
Create a new discovery URL
Give the stack a name other than the standard "deis" when using the ./provision-aws-cluster.sh script
Create different Deis profiles in $HOME/.deis/client.json that map to each cluster
And when using the deisctl and deis command-line interfaces, I need to specify DEISCTL_TUNNEL and DEIS_PROFILE each time, respectively.
Am I missing anything? Will this impact my current Deis cluster if I install using the changes listed above?
That is correct; I don't believe you are missing anything. You should save the cloud-config for each cluster (in contrib/coreos); it contains the discovery URL and possibly other customizations, depending on how your clusters will be configured. If the clusters are going to differ on the AWS side, make sure you save the cloudformation.json file for each as well.
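For the per-cluster switching mentioned in the question, a sketch of what each invocation could look like (the controller address and profile name are placeholders):

# Point deisctl at one specific cluster for this command.
DEISCTL_TUNNEL=54.0.0.10 deisctl list

# Use one of the client profiles you created under $HOME/.deis/.
DEIS_PROFILE=cluster2 deis apps:list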