ECS with Docker Compose environment variables

I'm deploying to ECS with the Docker Compose integration; however, I'm somewhat confused about environment variables.
Right now my docker-compose.yml looks like this:
version: "3.8"
services:
  simple-http:
    image: "${IMAGE}"
    secrets:
      - message
secrets:
  message:
    name: "arn:aws:ssm:<AWS_REGION>:<AWS_ACCOUNT_ID>:parameter/test-env"
    external: true
Now in my Container Definitions I get a Simplehttp_Secrets_InitContainer that references this secret as message with the correct ARN, but there is no variable named message inside my running container.
I'm a little confused, as I thought this was the correct way of passing environment variables such as DB passwords, AWS credentials, and so forth.
In the docs we see:
services:
  test:
    image: "image"
    environment:
      - "FOO=BAR"
But is this the right and secure way of doing this? Am I missing something?

I haven't played much with secrets in this ECS/Docker integration, but there are a couple of things that don't add up between your understanding and the docs. First, the integration appears to work with Secrets Manager, not SSM. Second, according to the docs, the content won't be available as an environment variable but rather as a flat file at runtime, at /run/secrets/message in your example.
Check out this page for the fine details: https://docs.docker.com/cloud/ecs-integration/#secrets
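Since the secret is surfaced as a file rather than an environment variable, the container has to read it from that path itself. A minimal sketch of one way to bridge the two, assuming your image has a shell available; ./server is a hypothetical stand-in for your real entrypoint:

services:
  simple-http:
    image: "${IMAGE}"
    # Read the mounted secret file into an environment variable, then
    # start the app; "./server" is a hypothetical placeholder.
    command: sh -c 'export MESSAGE="$(cat /run/secrets/message)" && exec ./server'
    secrets:
      - message

The top-level secrets definition stays exactly as in your compose file above.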

Related

Can you import the BUILD_ID of a Cloud Build into your Cloud Run Python container?

We want to use Django's Redis cache feature that allows us to specify version numbers, which effectively invalidate cache values from a previous build (before the code changed).
GCP's Cloud Build has a default $BUILD_ID value available to the build YAML files, but is there a way for a deployed container to access this BUILD_ID value? If we could, we could use it (or a modulo value of it) as our unique cache version.
See https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values for GCP build variables
See https://docs.djangoproject.com/en/4.0/topics/cache/#cache-arguments for django cache documentation
Use the projects.builds.list API method. You can use this API to get the full list of builds, and you can set pageSize to control the number of results returned in the list.
When you get the response from the API, you can do whatever you want with your $BUILD_ID.
I hope this information is helpful.
Your cloudbuild.yaml file can pass substitution variables into a container you are deploying by using the --set-env-vars flag.
# Deploy container image to Cloud Run
- id: "deploy"
  name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
    - 'run'
    - 'deploy'
    - '${_SERVICE_NAME}'
    - '--image'
    - 'gcr.io/$PROJECT_ID/${_SERVICE_NAME}:$COMMIT_SHA'
    - '--platform=managed'
    - '--region=${_DEPLOY_REGION}'
    - '--vpc-connector=redis'
    - '--set-env-vars=REDISHOST=${_REDIS_HOST},REDISPORT=${_REDIS_PORT},BUILD_ID=$BUILD_ID'
Here we pass _REDIS_HOST, _REDIS_PORT, and $BUILD_ID into the container as REDISHOST, REDISPORT, and BUILD_ID.
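The underscore-prefixed names (_SERVICE_NAME, _DEPLOY_REGION, _REDIS_HOST, _REDIS_PORT) are user-defined substitutions. As a minimal sketch, you can give them defaults in a top-level substitutions block of cloudbuild.yaml; values set on the trigger or via gcloud builds submit --substitutions override these (the values below are hypothetical):

substitutions:
  _SERVICE_NAME: 'my-service'
  _DEPLOY_REGION: 'us-central1'
  _REDIS_HOST: '10.0.0.3'
  _REDIS_PORT: '6379'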
We can now read these within Python, for example in the settings.py file:
import os
...
redis_host = os.environ.get('REDISHOST', 'localhost')
redis_port = int(os.environ.get('REDISPORT', 6379))
# Cloud Build's $BUILD_ID is a UUID string, so keep it as a string
build_id = os.environ.get('BUILD_ID')

.ebextensions .config file not working on Beanstalk

I need to change this setting on my AutoScaling group in my Beanstalk environment:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentconfig-autoscaling-healthchecktype.html
I'm doing exactly what the example shows, but it just doesn't work; nothing happens.
The file content:
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: ELB
      HealthCheckGracePeriod: 300
The structure:
my_project/
  .ebextensions/
    autoscaling.config
  src/
    ...
Is there any log where I can check if the file is even being read or not?
If you use eb deploy to deploy your changed code, it will not deploy your local working-tree changes, but your last git commit.
You can try to find something in the logs in /var/log, for example in eb-engine.log or messages, or even in the system logs ($ journalctl), if you have configured SSH access to the machine.
You can also use the web console to download the logs.
If you don't find anything, can you give some more information about the platform (Java, Node.js, Python) and the way you are deploying?
Bye,
Dirk
Okay, I found the problem. I'm using blue/green environments to deploy my application, so the new changes were being applied to the green environment, which was terminated afterwards.
In order for the new config to work, I had to rebuild the blue environment.
From now on, all new deploys will have the new config.

cf v3-push with manifest and variable substitution

I have a v3 app that I want to deploy to 2 different environments. The app name and some definitions vary from env to env, but the structure of the manifest is the same. For example:
# manifest_test.yml
applications:
- name: AppTest
  processes:
  - type: web
    command: start-web.sh
    instances: 1
  - type: worker
    command: start-worker.sh
    instances: 1
# manifest_prod.yml
applications:
- name: AppProd
  processes:
  - type: web
    command: start-web.sh
    instances: 3
  - type: worker
    command: start-worker.sh
    instances: 5
Instead of keeping duplicate manifests with only minor changes in variables, I wanted to use a single manifest with variable substitution. So I created something like this:
# manifest.yml
applications:
- name: App((env))
  processes:
  - type: web
    command: start-web.sh
    instances: ((web_instances))
  - type: worker
    command: start-worker.sh
    instances: ((worker_instances))
However, it seems like cf v3-apply-manifest doesn't have an option to provide variables for substitution (as cf push did).
Is there any way around this, or do I have to keep using a separate manifest for each environment?
Please try one of the cf v7 CLI beta releases. I haven't tested it, but the output of cf7 push -h shows flags for --vars and --vars-file. It also uses the v3 APIs, so it will support things like rolling deploys.
For what it's worth, if you're looking to use CAPI v3 features, you should probably use the cf7 beta releases going forward. That is going to get you the latest and greatest support for CAPI v3.
Hope that helps!
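As a sketch of how that looks with the parameterized manifest.yml above, you would keep the single manifest and supply a small vars file per environment (the file name and values here are hypothetical), then push with cf push --vars-file vars-prod.yml:

# vars-prod.yml
env: Prod
web_instances: 3
worker_instances: 5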

GCP cloudbuild.yaml: kmsKeyName requires hardcoded value. How can we adapt for separate environments?

We have two separate GCP projects (one for dev and one for prod). We are using Cloud Build to deploy our project, utilizing repo mirroring and a Cloud Build trigger that fires whenever the dev or prod branch is updated. The cloudbuild.yaml file looks like this:
steps:
# Firestore security rules deploy
- name: "gcr.io/$PROJECT_ID/firebase"
  args: ["deploy", "--only", "firestore:rules"]
  secretEnv: ['FIREBASE_TOKEN']
# Firestore indexes deploy
- name: "gcr.io/$PROJECT_ID/firebase"
  args: ["deploy", "--only", "firestore:indexes"]
  secretEnv: ['FIREBASE_TOKEN']
secrets:
- kmsKeyName: 'projects/my-dev-project/locations/global/keyRings/ci-ring/cryptoKeys/deployment'
  secretEnv:
    FIREBASE_TOKEN: 'myreallylongtokenstring'
timeout: "1600s"
The problem we have is that the kmsKeyName apparently needs to be hardcoded in order for GCP to read it, meaning we can't do something like this:
secrets:
- kmsKeyName: 'projects/$PROJECT_ID/locations/global/keyRings/ci-ring/cryptoKeys/deployment'
  secretEnv:
    FIREBASE_TOKEN: 'myreallylongtokenstring'
This does not lend itself well to a continuous-deployment process like the one we are using, since we'd like the kmsKeyName string to be set dynamically with the relevant project id depending on whether we are deploying to the dev or the prod environment.
Is there a way around this that would allow us to dynamically specify the kmsKeyName?
Update:
We have found a quick/dirty solution, which was to create individual cloudbuild.yaml files: one for dev (cloudbuild-dev.yaml) and one for prod (cloudbuild-prod.yaml). Each cloudbuild file is identical except for the last part, where we specify our hardcoded secrets info.
Explanation: GCP Cloud Build relies on an individual trigger for each environment build, and each trigger can be configured to point at a specific cloudbuild YAML file, which is what we have done. The dev build trigger points at cloudbuild-dev.yaml, and the production trigger points at cloudbuild-prod.yaml.
Indeed, I tried different configurations: single quotes, double quotes, no quotes, substitution variables, and so on.
The boring solution is to use manual decryption, as described here. With that approach you can use variables and substitution variables as you want.
The boring part is that you have to inject the secret into each step that requires it, like this (for example as an environment variable):
- name: "gcr.io/$PROJECT_ID/firebase"
  entrypoint: "bash"
  args:
    - "-c"
    - "export FIREBASE_TOKEN=$(cat secrets.json) && firebase deploy --only firestore:rules"
I don't know of another workaround.

How to use custom image in gcloud?

I am quite new to DevOps and I am trying to use a custom Docker image that I have pushed to Docker Hub.
In my app.yaml I have replaced runtime: python with runtime: solalsab/clarins. Is this approach correct? Secondly, I get the following error message:
Value 'solalsab/clarins' for runtime does not match expression '^(?:((gs://[a-z0-9\-\._/]+)|([a-z][a-z0-9\-\.]{0,29})))$'
In the app.yaml it should be runtime: custom and env: flex.
The image should be defined in the Dockerfile: FROM solalsab/clarins
Check this Custom Runtimes Quickstart.
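A minimal sketch of that setup, assuming the Dockerfile (containing the FROM solalsab/clarins line) sits next to app.yaml and the image serves on port 8080, as App Engine flexible expects:

# app.yaml
runtime: custom
env: flex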