Template syntax in Docker Compose file

Can we use templates in a Docker Compose YML file?
For example, I want to deploy a service in replicated mode and set the container names to something like: -servicename-_-replicId-

Short answer: yes, and in the Compose context it's called interpolation or variable substitution: https://docs.docker.com/compose/compose-file/#variable-substitution
A bit more detail: you can interpolate values from environment variables, and you can also provide defaults in case the environment doesn't contain the necessary variable.
An example taken from the official docs looks like this:
db:
  image: "postgres:${POSTGRES_VERSION}"
Now regarding your actual use case of naming a container: the container name stems from a key, not from a property value, so interpolation doesn't apply to it directly. In the example above, db is the key from which the container name would be generated, and a key isn't a property value that Compose will interpolate. To make your use case work, use the container_name property to explicitly override the generated container name:
db:
  container_name: "app_${CONTAINER_NAME_SUFFIX}"
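For completeness, a minimal sketch combining both ideas with default values; the ${VAR:-default} form is part of Compose's documented substitution syntax, while the specific variable names and defaults here are just illustrative:
db:
  image: "postgres:${POSTGRES_VERSION:-13}"
  container_name: "app_${CONTAINER_NAME_SUFFIX:-1}"
If POSTGRES_VERSION or CONTAINER_NAME_SUFFIX is unset (or empty) in the environment, the default after :- is used instead.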

There is a newer project that addresses docker-compose templating, called octo-compose.
The project includes:
templating by defining custom variables,
templating by using built-in variables such as the instance id, ports from a range, ...,
running host-preparation bash scripts,
recursive inclusion of octo-compose files from another repo or directory,
support for Docker Swarm deployments.

Related

In an AWS lambda, how do I access the image_id or tag of the launched container from within it?

I have an AWS lambda built using SAM. I want to propagate the id (or, if it's easier, the tag) of the lambda's supporting Docker image through to the lambda runtime function.
How do I do this?
Note: I do mean image id and NOT container id - what you'd see if you called docker image ls locally. Getting the container id / hostname is the easy bit :D
I have tried to declare a parameter in the template.yaml and have it picked up as an environment variable that way. I would prefer to define the value at most once within the template.yaml, and preferably have it auto-populated, though I am not aware of best practice there. The aim is to avoid human error. I don't want to pass the value on the command line unless I have to.
If it's too hard to get the image id then as a fallback the DockerTag would be fine. Again, I don't want this in multiple places in the template.yaml. Thanks!
Unanswered similar question: Finding the image ID of a container from within the container
The launched image URI is available in the packaged template file after running sam package, so it's possible to extract the tag from there.
For example, if using YAML:
grep -w ImageUri packaged.yaml | cut -d: -f3
This will find the URI in the packaged template (which looks like ImageUri: 12345.dkr.ecr.us-east-1.amazonaws.com/myrepo:mylambda-123abc-latest) and grab the tag, which comes after the 2nd :.
That said, I don't think it's a great solution. I wish there was a way using the SAM CLI.
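If you do accept passing the extracted tag back in at deploy time (e.g. via sam deploy --parameter-overrides), a minimal template.yaml sketch could map it onto an environment variable. ImageTagParam and IMAGE_TAG are hypothetical names, not SAM built-ins, and the function definition is trimmed for brevity:
Parameters:
  ImageTagParam:
    Type: String
    Default: unknown

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      # ImageUri / build Metadata omitted for brevity
      Environment:
        Variables:
          IMAGE_TAG: !Ref ImageTagParam
The runtime function can then read IMAGE_TAG from its environment, at the cost of the value being supplied on the command line.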

Lambda ValueFrom Environment Variable Like in Task Definition

Is there a way to have a ValueFrom feature in Lambda's environment variable similar to what we have in Task Definition?
Here is how it works in ECS.
We have a key-value pair in Parameter Store: /dev/db/host=localhost.
In the container definition inside the ECS task definition, we add a new environment variable DB_HOST which has a ValueFrom of /dev/db/host. When a new instance of the container runs, it picks up the value localhost from Parameter Store.
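For context, here is roughly what that looks like in a CloudFormation-style task definition; this is a minimal sketch where the family, container and image names are illustrative, and the Secrets/ValueFrom mapping is the documented ECS mechanism (the task execution role must be allowed to read the parameter):
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: my-app
    ContainerDefinitions:
      - Name: app
        Image: myrepo/app:latest
        Secrets:
          # injected into the container as the environment variable DB_HOST
          - Name: DB_HOST
            ValueFrom: /dev/db/host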
I tried this on Lambda, but it seems like the feature is not available there. Is there another way to do this? I wonder if there is a feature request for this as well.
PS: I'm aware that it can be done via Terraform or CloudFormation, but that would only evaluate and copy the values from Parameter Store to Lambda environment variables when the infrastructure is built. The problem is that some of the values are secured (like the DB password), so they cannot simply be copied, as they would get exposed.

How can I use the GCP project from an environment variable in the gcp_compute dynamic inventory?

For my Ansible playbooks I use the gcp_compute dynamic inventory. But I don't want to define the project ID and region in this file; I want to read them from environment variables.
In a playbook I can use lookup('env', 'FOO_BAR') and it works, but in the inventory it does not.
This guide says it is possible to use some environment variables. I tried it and it works, but there are no variables for project and region. (I also tried something like GCP_PROJECT or GCP_PROJECT_ID, but that does not work.)

Creating new instances + hosts file

So I have been trying to create an Ansible playbook which creates a new instance on GCP and creates a test file inside that instance. I've been using this example project from GitHub as a template. In this example project, there is an ansible_hosts file which contains this host:
[gce_instances]
myinstance[1:4]
but I don't have any idea what it actually does.
The fragment you provided is core Ansible and not actually related to anything GCP-specific. This is a good reference doc: Working with Inventory.
At a high level, the hosts file defines the machine identities against which Ansible is to execute:
[gce_instances]
myinstance[1:4]
With the hosts file, you can define groups of hosts, which lets you apply Ansible playbooks to a subset of hosts at a time.
In the example, a group is created that is called gce_instances. There is nothing special or magic about the name; it isn't any kind of keyword.
Within a group, we specify the hostnames that we wish to work against.
The example given uses a numeric range pattern and is simply shorthand for:
[gce_instances]
myinstance1
myinstance2
myinstance3
myinstance4
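To tie this back to the original goal of creating a test file on the instances, a minimal playbook sketch that targets the group might look like the following; the file path and content are placeholders, and it assumes the four hosts are resolvable and reachable over SSH:
- hosts: gce_instances
  tasks:
    - name: Create a test file on every instance in the group
      copy:
        content: "hello from ansible\n"
        dest: /tmp/ansible_test.txt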

CronJob, Django and environment variables

I have built a Django application on OpenShift v3 Pro with the django-ex template. It works great. I'm using PostgreSQL with persistent storage.
I need a scheduled cron job that fires every hour to run some Django management commands. I'm using a CronJob for this.
My problem is this: I need to create the CronJob with the same environment variables that the Django pod was created with (DATABASE_, DJANGO_, and others), but I don't see an easy way to do this.
Any help would be appreciated.
You should be able to include a list of environment variables to set as part of the containers definition in the template spec for the job. I can't properly extract the resource definition for a CronJob using oc explain in OpenShift 3.6 because of the way it is registered, but I would expect the field to be similar to:
CronJob.spec.jobTemplate.spec.template.spec.containers.env

RESOURCE: env <[]Object>

DESCRIPTION:
     List of environment variables to set in the container. Cannot be updated.

     EnvVar represents an environment variable present in a Container.

FIELDS:
   name <string> -required-
     Name of the environment variable. Must be a C_IDENTIFIER.

   value <string>
     Variable references $(VAR_NAME) are expanded using the previous defined
     environment variables in the container and any service environment
     variables. If a variable cannot be resolved, the reference in the input
     string will be unchanged. The $(VAR_NAME) syntax can be escaped with a
     double $$, ie: $$(VAR_NAME). Escaped references will never be expanded,
     regardless of whether the variable exists or not. Defaults to "".

   valueFrom <Object>
     Source for the environment variable's value. Cannot be used if value is
     not empty.
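As an illustrative (not verbatim) sketch of that structure, a CronJob that runs an hourly Django management command and sets its own environment variables might look like the following; the apiVersion (batch/v2alpha1 vs batch/v1beta1) depends on the cluster version, and the image, command, variable names and secret reference are placeholders:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-hourly-job
spec:
  schedule: "0 * * * *"              # every hour, on the hour
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: django-management
              image: my-django-image:latest
              command: ["python", "manage.py", "clearsessions"]
              env:
                - name: DATABASE_NAME
                  value: mydb
                - name: DJANGO_SECRET_KEY
                  valueFrom:
                    secretKeyRef:
                      name: django-secrets
                      key: secret-key
Depending on the cluster version, envFrom with a ConfigMap or Secret may also be an option for pulling in a whole set of variables at once instead of listing each one.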