cf v3-push with manifest and variable substitution - cloud-foundry

I have a v3 app that I want to deploy to 2 different environments. The app name and some definitions vary from env to env, but the structure of the manifest is the same. For example:
# manifest_test.yml
applications:
- name: AppTest
  processes:
  - type: web
    command: start-web.sh
    instances: 1
  - type: worker
    command: start-worker.sh
    instances: 1
# manifest_prod.yml
applications:
- name: AppProd
  processes:
  - type: web
    command: start-web.sh
    instances: 3
  - type: worker
    command: start-worker.sh
    instances: 5
Instead of keeping duplicate manifests with only minor changes in variables, I wanted to use a single manifest with variable substitution. So I created something like this:
# manifest.yml
applications:
- name: App((env))
  processes:
  - type: web
    command: start-web.sh
    instances: ((web_instances))
  - type: worker
    command: start-worker.sh
    instances: ((worker_instances))
However, it seems that cf v3-apply-manifest doesn't have an option to provide variables for substitution (as cf push does).
Is there any way around this, or do I have to keep using a separate manifest for each environment?

Please try one of the cf v7 CLI beta releases. I haven't tested it, but the output of cf7 push -h shows flags for --vars and --vars-file. It also uses the v3 APIs, so it will support things like rolling deploys.
For what it's worth, if you're looking to use CAPI v3 features, you should probably use the cf7 beta releases going forward. That is going to get you the latest and greatest support for CAPI v3.
Hope that helps!
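In the meantime, the ((var)) interpolation the manifest relies on can be approximated client-side before pushing. This is only a sketch of the substitution behaviour, not the cf CLI's actual implementation:

```python
import re

def interpolate(manifest_text, variables):
    """Substitute ((var)) placeholders, similar to what `cf push --var` does."""
    def replace(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"no value provided for variable: {name}")
        return str(variables[name])
    return re.sub(r"\(\((\w+)\)\)", replace, manifest_text)

# e.g. interpolate("- name: App((env))", {"env": "Prod"}) yields "- name: AppProd"
```

The rendered text could then be written to a temp file and passed to cf v3-apply-manifest as a stopgap until a CLI with native --vars support is available.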

Related

ECS with Docker Compose environment variables

I'm deploying to ECS with the Docker Compose API; however, I'm sort of confused about environment variables.
Right now my docker-compose.yml looks like this:
version: "3.8"
services:
  simple-http:
    image: "${IMAGE}"
    secrets:
      - message
secrets:
  message:
    name: "arn:aws:ssm:<AWS_REGION>:<AWS_ACCOUNT_ID>:parameter/test-env"
    external: true
Now in my Container Definitions, I get a Simplehttp_Secrets_InitContainer that references this environment variable as message and with the correct ARN, but there is no variable named message inside my running container.
I'm a little confused, as I thought this was the correct way of passing values such as DB passwords, AWS credentials, and so forth.
In the docs we see:
services:
  test:
    image: "image"
    environment:
      - "FOO=BAR"
But is this the right and secure way of doing this? Am I missing something?
I haven't played much with secrets in this ECS/Docker integration, but there are a couple of things that don't add up between your understanding and the docs. First, the integration appears to work with Secrets Manager, not SSM. Second, according to the docs, the content won't be available as an environment variable but rather as a flat file at runtime at /run/secrets/message (in your example).
Check out this page for the fine details: https://docs.docker.com/cloud/ecs-integration/#secrets
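Given that the secret materializes as a flat file rather than an environment variable, application code would read it from the mount path. A minimal sketch, using the path and secret name from the example above:

```python
from pathlib import Path

def read_secret(name, base="/run/secrets"):
    """Read a secret that was mounted as a flat file under /run/secrets."""
    return Path(base, name).read_text().strip()

# inside the container: message = read_secret("message")
```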

Dynamic Command for Kubernetes Jobs

So hopefully this makes sense to the non-Djangoers of the k8s community. I will try my best to explain the setup / reasoning.
With Django, we have LOTS of what are called management commands that we can run within the scope and environment of our Django app that can really help development and deployment. I'm sure most other frameworks have similar, if not identical, concepts.
An example would be the "python manage.py migrate" command that ensures our codebase (migration scripts) are applied to and reflect in the associated database.
There are approx. 30 - 50 core commands we can run, we can also create our own, as well as apply those from any installed third party applications.
Anyways. The most important takeaway is that there are a lot of commands we can and do run.
Now, I have the following k8s Job to run the "migrate" command:
apiVersion: batch/v1
kind: Job
metadata:
  name: asencis-web-migrate-job
spec:
  template:
    spec:
      containers:
      - name: asencis-web-migrate-job
        image: asencis/asencis-base:latest
        imagePullPolicy: Always
        command: ['python', 'manage.py', 'migrate']
        envFrom:
        - configMapRef:
            name: asencis-config
        - secretRef:
            name: asencis-secret
      restartPolicy: Never
  backoffLimit: 5
This job essentially runs the python manage.py migrate command within the application scope/environment. It works like a charm:
$ kubectl apply -f asencis-web-migrate-job.yaml
It's very useful in the deployment of the application: when all of our tests have run, we can build an image, "rollout restart" the cluster, and then apply any migrations. It's incredibly seamless. (Tip of my hat to the k8s core team for making such a useful product!)
Anyways.
My question is essentially this, can we apply an argument to the above kubectl apply command on the job, to run any command we like?
An example would be:
$ kubectl apply -f asencis-web-job.yaml --command python manage.py migrate
You will probably need to build your own tool for this.
You could use a shell script around yq, for example:
#!/bin/sh
yq eval \
  ".spec.template.spec.containers[0].command[2] = \"$1\"" \
  template-web-job.yaml \
  | kubectl apply -f -
You can fill in more parts of the Job this way: compute .metadata.name from $USER-$1-$(date +%s), attach labels: to the Pod to find it later, and so on.
If this wasn't a one-off Job I might recommend a more Kubernetes-native tool like Helm or Kustomize. Both tools need somewhat involved filesystem layouts, and then you need to pass the variables (script command, submitter) in some form; this wouldn't actually be easier than using yq to rewrite the YAML. Helm is a little more oriented towards mostly-stable installations (it knows about the main application Deployment and how to update it in-place).
If you have Helm already, you could build a similar script around helm template. Or if you have jq but not yq you could rewrite the Job in JSON {"apiVersion": "batch/v1", "kind": "Job", ...} but otherwise use the same script.
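Since a Job is just structured data, the same rewrite can be done without yq at all. A sketch in Python that patches the command and name in a JSON-form Job before piping it to kubectl apply -f - (the template and names here are illustrative):

```python
import json

def render_job(template, command, name):
    """Return a copy of a Job manifest with the container command and Job name patched."""
    job = json.loads(json.dumps(template))  # deep copy via JSON round-trip
    job["metadata"]["name"] = name
    job["spec"]["template"]["spec"]["containers"][0]["command"] = command
    return job

template = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "placeholder"},
    "spec": {
        "template": {"spec": {
            "containers": [{
                "name": "web-job",
                "image": "asencis/asencis-base:latest",
                "command": ["python", "manage.py", "migrate"],
            }],
            "restartPolicy": "Never",
        }},
        "backoffLimit": 5,
    },
}

job = render_job(template, ["python", "manage.py", "showmigrations"], "web-showmigrations-job")
# print(json.dumps(job)) and pipe the output to: kubectl apply -f -
```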

Understanding the following environment entry in manifest.yml pivotal cloud foundry

I have this manifest.yml:
applications:
- name: xx
  buildpack: java-bp480-v2
  instances: 2
  memory: 2G
  path: webapp/build/libs/trid.war
  services:
  - xxservice
  - xxservice
  - xxcktbrkrcnfgsvc
  - xxappdynamics
  - autoscaler-xx
env:
  spring_profiles_active: cloud
  swagger_active: false
  JAVA_OPTS: -Dspring.profiles.active=cloud -Xmx1G -Xms1G -XX:NewRatio=1 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
What will env do?
Will that create three environment variables, or will it append JAVA_OPTS to the start command if the active Spring profile is cloud?
What will env do?
The env block will instruct the cf cli to create environment variables on your behalf. Entries take the form variable_name: variable_value. From your example, you'll end up with a variable named spring_profiles_active with a value of cloud. Plus the other two you have defined.
JAVA_OPTS is a special env variable for the Java buildpack. Whatever you put into JAVA_OPTS will be included in the start command for your application. It is an easy way to add additional arguments, system properties and configuration flags to the JVM.
Please note, at least in the example above, the spacing is wrong on your env: block. It's all the way to the left, but env: should be indented two spaces. Then each env variable defined under the env: block should be indented two more spaces, for a total of four. YAML is very picky about spaces and indentation. When in doubt, use a YAML validator to confirm your YAML is valid.
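To illustrate the JAVA_OPTS behaviour: the Java buildpack effectively splices the variable's value into the JVM invocation. This is a simplified sketch of that idea (the real buildpack's command construction is more involved, and the jar name is a placeholder):

```python
def start_command(env, jar="app.jar"):
    """Approximate how JAVA_OPTS ends up in the generated JVM start command."""
    opts = env.get("JAVA_OPTS", "")
    return f"java {opts} -jar {jar}".replace("  ", " ")

env = {"spring_profiles_active": "cloud", "JAVA_OPTS": "-Xmx1G -Xms1G"}
# start_command(env) -> "java -Xmx1G -Xms1G -jar app.jar"
```

The other entries, like spring_profiles_active, are simply exported as environment variables and never touch the start command.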

Reuse a cloudfoundry app without having to rebuild from scratch

I deploy a Django application with Cloud Foundry. Building the app takes some time, but I need to launch the application with different start commands, and the only solution I have today is to fully rebuild the application each time.
With Docker, changing the start command is very easy and doesn't require rebuilding the whole container, so there must be a more efficient way to do this.
Here are the applications launched:
FrontEndApp-Prod: The Django App using gunicorn
OrchesterApp-Prod: The Django Celery Camera & Heartbeat
WorkerApp-Prod: The Django Celery Workers
All these apps are basically identical, they just use different routes, configurations and start commands.
Below is the file manifest.yml I use:
defaults: &defaults
  timeout: 120
  memory: 768M
  disk_quota: 2G
  path: .
  stack: cflinuxfs2
  buildpack: https://github.com/cloudfoundry/buildpack-python.git
  services:
  - PostgresDB-Prod
  - RabbitMQ-Prod
  - Redis-Prod
applications:
- name: FrontEndApp-Prod
  <<: *defaults
  routes:
  - route: www.myapp.com
  instances: 2
  command: chmod +x ./launch_server.sh && ./launch_server.sh
- name: OrchesterApp-Prod
  <<: *defaults
  memory: 1G
  instances: 1
  command: chmod +x ./launch_orchester.sh && ./launch_orchester.sh
  health-check-type: process
  no-route: true
- name: WorkerApp-Prod
  <<: *defaults
  instances: 3
  command: chmod +x ./launch_worker.sh && ./launch_worker.sh
  health-check-type: process
  no-route: true
Two options I can think of for this:
You can use some of the new v3 API features and take advantage of their support for multiple processes in a Procfile. With that, you'd essentially have a Procfile like this (process type names must be unique):
web: ./launch_server.sh
orchester: ./launch_orchester.sh
worker: ./launch_worker.sh
The platform should then stage your app once, but deploy it three times based on the droplet that is produced from staging. It's slick because you end up with only one application that has multiple processes running off of it. The drawback is that this is an experimental API at the time of writing, so it still has some rough edges, plus the exact support you get could vary depending on how quickly your CF provider installs new versions of the Cloud Controller API.
You can read all the details about this here:
https://www.cloudfoundry.org/blog/build-cf-push-learn-procfiles/
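For reference, each Procfile line maps a unique process-type name to a start command. A minimal parser sketch showing that structure (illustrative only, not cf's actual implementation):

```python
def parse_procfile(text):
    """Parse 'type: command' lines; later duplicates overwrite earlier ones,
    which is why each process type must have a unique name."""
    procs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, cmd = line.partition(":")
        procs[name.strip()] = cmd.strip()
    return procs

procfile = "web: ./launch_server.sh\norchester: ./launch_orchester.sh\nworker: ./launch_worker.sh"
# parse_procfile(procfile) yields three distinct processes staged from one droplet
```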
You can use cf local. This is a cf cli plugin which allows you to build a droplet locally (staging occurs in a docker container on your local machine). You can then take that droplet and deploy it as much as you want.
The process would look roughly like this, you'll just need to fill in some options/flags (hint run cf local -h to see all the options):
cf local stage
cf local push FrontEndApp-Prod
cf local push OrchesterApp-Prod
cf local push WorkerApp-Prod
The first command will create a file ending in .droplet in your current directory, the subsequent three commands will deploy that droplet to your provider and run it. The net result is that you should end up with three applications, like you have now, that are all deployed from the same droplet.
The drawback is that your droplet is local, so you're uploading it three times, once for each app.
I suppose you also have a third option, which is to just use a Docker container. That has its own advantages & drawbacks though.
Hope that helps!

Ansible - How to launch (purchase) a reserved EC2 instance

How can I launch (purchase) a reserved EC2 instance using Ansible with the EC2 module? I've googled terms like 'ec2 reserved instance ansible' but no joy.
Or should I use the AWS CLI instead?
Or you can create an Ansible module.
There are also existing modules that you can use as examples: ansible-modules-extras/cloud/amazon.
PS:
Modules can be written in any language and are found in the path
specified by ANSIBLE_LIBRARY or the --module-path command line option.
By default, everything that ships with ansible is pulled from its
source tree, but additional paths can be added.
The directory "./library", alongside your top level playbooks, is also
automatically added as a search directory.
I just made a PR which might help you.
You could use it as follows:
- name: Purchase reserved instances
  boto3:
    name: ec2
    region: us-east-1
    operation: purchase_reserved_instances_offering
    parameters:
      ReservedInstancesOfferingId: 9a06095a-bdc6-47fe-a94a-2a382f016040
      InstanceCount: 3
      LimitPrice:
        Amount: 123.0
        CurrencyCode: USD
  register: result
- debug: var=result
If you're interested in this feature, feel free to vote up the PR. :)
I looked into the Cloud modules list and found there aren't any modules out of the box that support reserved instances. I think you should try building a wrapper over the AWS CLI or the Python boto SDK (or any SDK).
This is pseudo code for the playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
  - name: 'Calling Python code to reserve instance'
    raw: python reserve-ec2-instance.py args
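The raw: task above shells out to a Python script; a sketch of what that script's parameter handling might look like, with the actual boto3 call left commented out since it requires real AWS credentials (the offering ID and price are the example values from the earlier answer):

```python
def purchase_params(offering_id, count, amount, currency="USD"):
    """Build the kwargs for boto3's ec2.purchase_reserved_instances_offering."""
    return {
        "ReservedInstancesOfferingId": offering_id,
        "InstanceCount": count,
        "LimitPrice": {"Amount": float(amount), "CurrencyCode": currency},
    }

params = purchase_params("9a06095a-bdc6-47fe-a94a-2a382f016040", 3, 123.0)
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# result = ec2.purchase_reserved_instances_offering(**params)
```

Setting LimitPrice caps what the purchase can cost, which is a sensible guard for anything run unattended from a playbook.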