How to run tasks in parallel for a large output in Argo?

I have a large list output and would like to run a separate task for each entry. For small outputs I can use the template below, but if I increase the output size, e.g.
for run in {1..100000}; do
the workflow crashes. How can I solve this problem? I tried to use artifacts, but with artifacts I cannot iterate over the elements of a list.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: example-large-output-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: create-large-output
        template: create-large-output-template
    - - name: iterate-large-output
        template: iterate-large-output-template
        arguments:
          parameters:
          - name: fp
            value: "{{item}}"
        withParam: "{{steps.create-large-output.outputs.result}}"
  - name: create-large-output-template
    script:
      image: debian:9.4
      command: [bash]
      source: |
        o='['
        for run in {1..100}; do
          o=$o'"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",'
        done
        o=${o::-1}"]"
        echo $o
  - name: iterate-large-output-template
    inputs:
      parameters:
      - name: fp
    script:
      image: alpine
      command:
      - sh
      source: |
        echo {{inputs.parameters.fp}}
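The crash is most likely a size limit rather than a bug in the template: the whole Workflow object, including the expanded step list, has to fit in etcd (roughly 1 MB), and outputs.result is itself capped (256 kB in recent Argo versions), so both a huge JSON result and a 100000-way fan-out blow the limit. One workaround is to keep the full payload in an artifact and fan out only over chunk indices. Below is a minimal, untested sketch of that idea; it assumes an artifact repository is configured, and the template names, the 100-chunk/1000-line split, and the /tmp/items.txt path are all illustrative:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: example-chunked-output-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: create-chunks
        template: create-chunks-template
    - - name: process-chunk
        template: process-chunk-template
        arguments:
          parameters:
          - name: chunk
            value: "{{item}}"
          artifacts:
          - name: items
            from: "{{steps.create-chunks.outputs.artifacts.items}}"
        withParam: "{{steps.create-chunks.outputs.result}}"
  - name: create-chunks-template
    script:
      image: debian:9.4
      command: [bash]
      source: |
        # the full payload goes to a file that becomes an output artifact...
        for run in {1..100000}; do echo "item-$run"; done > /tmp/items.txt
        # ...while outputs.result stays a small JSON list of chunk ids: [0,1,...,99]
        o='['; for c in {0..99}; do o="$o$c,"; done; echo "${o::-1}]"
    outputs:
      artifacts:
      - name: items
        path: /tmp/items.txt
  - name: process-chunk-template
    inputs:
      parameters:
      - name: chunk
      artifacts:
      - name: items
        path: /tmp/items.txt
    script:
      image: debian:9.4
      command: [bash]
      source: |
        # each of the 100 tasks handles a 1000-line slice instead of one item
        start=$(( {{inputs.parameters.chunk}} * 1000 + 1 ))
        sed -n "${start},$(( start + 999 ))p" /tmp/items.txt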

Related

Azure DevOps Template - conditions

Is it possible to add conditions to this way of using templates? I would like to use conditions based on branches:
If branch = main or development, deploy to dev.
Only if branch = main, deploy to staging.
The complete pipeline:
trigger:
  batch: true
  branches:
    include:
    - main
pool: 'On-Prem Pool'
variables:
- group: ContainerRegistry
- group: xyz
- name: appName
  value: 'xyz'
- name: dockerfilePath
  value: '**/Dockerfile'
stages:
# Build App and push to container registry
- stage: Build
  displayName: Build and push to container registry
  jobs:
  - job: Build
    displayName: Build job
    pool: 'On-Prem Pool'
    steps:
    - task: Docker@2
      displayName: Login to ACR
      inputs:
        command: login
        containerRegistry: $(registryServiceConnection)
    - task: Docker@2
      displayName: Build container image
      inputs:
        command: build
        repository: $(appName)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(registryServiceConnection)
        tags: $(Build.BuildId)
    - task: Docker@2
      displayName: Push container image
      inputs:
        command: push
        repository: $(appName)
        containerRegistry: $(registryServiceConnection)
      condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest')) # Don't push to ACR on pull requests
# Infrastructure
# Lint the Bicep file.
- stage: Lint
  jobs:
  - template: pipeline-templates/lint.yml
# Deploy to the dev environment.
- template: pipeline-templates/deploy.yml
  parameters:
    environmentType: Development
    resourceGroupName: rg-xx
    appSuffix: xyz
    costCenter: 'xyz'
    serviceConnectionName: 'SVCxx'
    dockerRegistryPassword: $(dockerRegistryPassword)
    dockerRegistryUserName: $(dockerRegistryUserName)
    containerTag: $(Build.BuildId)
    registryUrl: $(registryUrl)
    cpuCores: 0.5
    memory: 1
    minReplicas: 0
    maxReplicas: 1
# Deploy to the staging environment.
- template: pipeline-templates/deploy.yml
  parameters:
    environmentType: Staging
    resourceGroupName: rg-xxx-001
    appSuffix: xyz
    costCenter: 'xyz'
    serviceConnectionName: 'SVCxxx'
    dockerRegistryPassword: $(dockerRegistryPassword)
    dockerRegistryUserName: $(dockerRegistryUserName)
    containerTag: $(Build.BuildId)
    registryUrl: $(registryUrl)
    cpuCores: 0.5
    memory: 1
    minReplicas: 1
    maxReplicas: 2
It is the last two templates that I would like to apply conditions to.
Many thanks!
During my test I edited my build.yaml as below. When I ran it, the condition for the second template evaluated to true, but the third one did not, so the third template was skipped.
pool:
  vmImage: windows-latest
stages:
- template: template.yml
  parameters:
    env: dev
- ${{ if eq(variables['Build.QueuedBy'], 'Ceeno') }}:
  - template: template.yml
    parameters:
      env: qa
- ${{ if eq(variables['Build.QueuedBy'], 'Vacee') }}:
  - template: template.yml
    parameters:
      env: prod
If you are looking to evaluate another method or scenario, you could share it with me for further investigation.
Yes, I solved it in a similar way:
- stage: BuildDeployProd
  displayName: Build and deploy Prod env
  jobs:
  - template: pipeline-templates/deploy.yml
    parameters:
      condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
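Note that the condition: above is evaluated at runtime, so the stage still appears as skipped. If the goal is to drop the stages entirely at compile time, the ${{ if }} style from the answer above also works with branch names. A sketch against the original pipeline (the remaining deploy.yml parameters are omitted here for brevity):

stages:
# Dev: included for main or development
- ${{ if in(variables['Build.SourceBranch'], 'refs/heads/main', 'refs/heads/development') }}:
  - template: pipeline-templates/deploy.yml
    parameters:
      environmentType: Development
# Staging: included only for main
- ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }}:
  - template: pipeline-templates/deploy.yml
    parameters:
      environmentType: Staging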

OpenShift Dockerfile Build that references an ImageStream?

I would like to build an image from a Dockerfile using an OpenShift BuildConfig that references an existing ImageStream in the FROM line. That is, if I have:
$ oc get imagestream openshift-build-example -o yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: openshift-build-example
  namespace: sandbox
spec:
  lookupPolicy:
    local: true
I would like to be able to submit a build that uses a Dockerfile like
this:
FROM openshift-build-example:parent
But this doesn't work. If I use a fully qualified image specification,
like this...
FROM image-registry.openshift-image-registry.svc:5000/sandbox/openshift-build-example:parent
...it works, but this is problematic, because it requires referencing
the namespace in the image specification. This means the builds can't
be conveniently deployed into another namespace.
Is there any way to make this work?
For reference purposes, the build is configured in the following BuildConfig resource:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: buildconfig-child
spec:
  failedBuildsHistoryLimit: 5
  successfulBuildsHistoryLimit: 5
  output:
    to:
      kind: ImageStreamTag
      name: openshift-build-example:child
  runPolicy: Serial
  source:
    git:
      ref: main
      uri: https://github.com/larsks/openshift-build-example
    type: Git
    contextDir: image/child
  strategy:
    dockerStrategy:
      dockerfilePath: Dockerfile
    type: Docker
  triggers:
  - type: "GitHub"
    github:
      secretReference:
        name: "buildconfig-child-webhook"
  - type: "Generic"
    generic:
      secret: "buildconfig-child-webhook"
And the referenced Dockerfile is:
# FIXME
FROM openshift-build-example:parent
COPY index.html /var/www/localhost/htdocs/index.html
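One documented way to avoid the fully qualified reference is to let the build strategy substitute the base image: when dockerStrategy.from is set, OpenShift replaces the Dockerfile's FROM image at build time, and an ImageStreamTag reference without an explicit namespace resolves in the namespace of the build. A sketch of the adjusted strategy section (hedged; untested against this exact repository):

strategy:
  type: Docker
  dockerStrategy:
    dockerfilePath: Dockerfile
    from:
      kind: ImageStreamTag
      name: openshift-build-example:parent

With this in place the Dockerfile can keep a plain FROM line as a placeholder (it is substituted at build time), so the BuildConfig can be deployed into any namespace that contains the image stream.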

Azure devops: pass variable group as parameter to Template

I am using an Azure DevOps YAML pipeline stored in the code base.
I have created a variable group in the pipeline library (Pipelines > Library > Variable groups) called 'MY_VG'.
In my pipeline (YAML) file I want to pass this variable group MY_VG to the template my_template.yml as a parameter.
But this parameter MY_VG is not expanded when I use it under variables (although printing it gives me the value).
How do I access the value of MY_VG in the template, at group: ${{parameters.variable_group}} shown below?
(I am calling a template file my_template_iterator.yml which iterates over the environments and calls my_template.yml.)
azure-pipelines.yml
resources:
  repositories:
  - repository: templates
    type: git
    name: MY_PROJECT/GIT_REPO_FOR_TEMPLATE
stages:
- stage: "CheckOut"
  displayName: Checkout
  jobs:
  - job: Checkout
    displayName: Checkout Application
    pool:
      name: $(my_pool_name)
    workspace:
      clean: all
    steps:
    - checkout: self
- template: folder_name/my_template_iterator.yml@templates
  parameters:
    agent_pool_name: $(my_pool)
    db_resource_path: $(System.DefaultWorkingDirectory)/src/main/resources/db
    envs:
    - env:
        env_name: 'dev'
        variable_group: MY_VG
        pipeline_environment_name: DEV_ENV
        is_master: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
Iterator: my_template_iterator.yml
parameters:
  agent_pool_name: ''
  db_resource_path: ''
  envs: {}
stages:
- ${{ each env in parameters.envs }}:
  - template: my_template.yml
    parameters:
      agent_pool_name: ${{ parameters.agent_pool_name }}
      db_resource_path: ${{ parameters.db_resource_path }}
      env_name: ${{ env.env_name }}
      variable_group: ${{ env.variable_group }}
      pipeline_environment_name: ${{ env.pipeline_environment_name }}
      is_master: ${{ env.is_master }}
my_template.yml
parameters:
- name: 'variable_group'
  type: string
  default: 'default_variable_group'
stages:
- stage:
  displayName: Read Parameters
  jobs:
  - job: READ
    displayName: Reading Parameters
    steps:
    - script: |
        echo variable_group: ${{parameters.variable_group}}
- stage:
  displayName: Deployment
  variables:
    group: ${{parameters.variable_group}}
  condition: ${{parameters.is_master}}
I tested with your YAML files above and they seem fine. However, I found a small mistake: you missed a - when defining the variable group in your my_template.yml. Maybe that is the reason the variable group did not expand for you.
variables:
# a variable group
- group: myvariablegroup
I modified your my_template.yml file a little bit and got it working. See below:
parameters:
- name: 'variable_group'
  type: string
  default: 'default_variable_group'
- name: agent_pool_name
  default: ""
- name: env_name
  default: ""
- name: db_resource_path
  default: ""
- name: pipeline_environment_name
  default: ""
- name: is_master
  default: ""
stages:
- stage:
  displayName: ${{parameters.env_name}}
  # I changed here: added '-' before group
  variables:
  - group: ${{parameters.variable_group}}
  jobs:
  - job: READ
    displayName: Reading Parameters
    steps:
    - script: |
        echo variable_group: ${{parameters.variable_group}}
    - powershell: echo "$(variableName-in-variablegroup)"
OK, I'm not sure if this is doable the way you are trying, but I tried another approach that seems to work.
First, let's define a template for a simple stage (stageTemplate.yaml):
parameters:
- name: env_name
  type: string
  default: 'dev'
- name: variable_group
  type: string
  default: 'MY_VG'
- name: pipeline_environment_name
  type: string
  default: 'DEV_ENV'
stages:
- stage: ${{ parameters.env_name }}
  jobs:
  - job: angularinstall
    steps:
    - script: echo "${{ parameters.env_name }}"
    - script: echo "${{ parameters.variable_group }}"
    - script: echo "${{ parameters.pipeline_environment_name }}"
Then a template for the main build (template.yaml):
parameters:
- name: stages    # the parameter is named 'stages'
  type: stageList # its data type is stageList
  default: []     # default is an empty stage list
stages: ${{ parameters.stages }}
and then we can combine them together in main build file:
extends:
  template: template.yaml
  parameters:
    stages:
    - template: stageTemplate.yaml
      parameters:
        env_name: 'dev'
        variable_group: 'MY_VG'
        pipeline_environment_name: 'DEV_ENV'
    - template: stageTemplate.yaml
      parameters:
        env_name: 'stage'
        variable_group: 'MY_STAGE'
        pipeline_environment_name: 'STAGE_ENV'
And with that, both stages ran with the expected values.
Does this bring you closer to your solution?

Azure DevOps Pipeline: same template twice in one stage

In my main pipeline, in one stage, I call the same (deployment) template twice with slightly different data:
# pipeline.yml
- stage: dev
  condition: and(succeeded(), eq('${{ parameters.environment }}', 'dev'))
  variables:
    getCommitDate: $[ stageDependencies.prepare_date.set_date.outputs['setCommitDate.rollbackDate'] ]
  jobs:
  - template: mssql/jobs/liquibase.yml@templates
    parameters:
      command: update
      username: $(username_dev)
      password: $(password_dev)
      environment: exampleEnv
      databaseName: exampleDB
      databaseIP: 123456789
      context: dev
      checkoutStep:
        bash: git checkout ${{parameters.commitHash}} -- ./src/main/resources/objects
  - template: mssql/jobs/liquibase.yml@templates
    parameters:
      command: rollbackToDate $(getCommitDate)
      username: $(username_dev)
      password: $(password_dev)
      environment: exampleEnv
      databaseName: exampleDB
      databaseIP: 123456789
      context: dev
# template.yml
parameters:
- name: command
  type: string
- name: environment
  type: string
- name: username
  type: string
- name: password
  type: string
- name: databaseName
  type: string
- name: databaseIP
  type: string
- name: context
  type: string
- name: checkoutStep
  type: step
  default:
    checkout: self
jobs:
- deployment: !MY PROBLEM!
  pool:
    name: exampleName
    demands:
    - agent.name -equals example
  environment: ${{ parameters.environment }}
  container: exampleContainer
  strategy:
    runOnce:
      deploy:
        steps:
        ...
My problem is that two deployments cannot have the same name.
It is not possible to use ${{parameters.command}} to distinguish the deployment names, because it contains forbidden characters, and only ${{parameters.command}} differs between the two calls.
My question is whether it is possible to distinguish the deployment names in some way other than passing another parameter (e.g. jobName:). I have tried various conditions and predefined variables without success.
Additionally, I need to add dependsOn so that the second template is guaranteed to run after the first.
It is not possible, because getCommitDate, and thus the command parameter in your second template call, contains a runtime expression, while a job name needs a compile-time expression. So if you used command as the job name, at compile time it would literally be rollbackToDate $(getCommitDate).
To solve this issue, the job identifier should be left empty in the template:
- job: # empty identifier
More information is available in the Microsoft documentation on job and stage templates.
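Applied to the template above, that means leaving the deployment identifier empty as well; a hedged sketch, assuming deployment jobs accept an empty identifier the same way plain jobs do:

jobs:
- deployment: # empty identifier; Azure DevOps generates a unique internal
              # name, so the template can be included twice in one stage
  pool:
    name: exampleName
  environment: ${{ parameters.environment }}

One caveat: an auto-generated name cannot be referenced from dependsOn, so if the rollback call must run strictly after the update, splitting the two calls into separate stages (or accepting a short extra name parameter) remains the practical option.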

How to set dynamic values with Kubernetes yaml file

For example, a deployment yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      container:
      - name: guestbook
        image: {{Here want to read value from config file outside}}
Kubernetes has a ConfigMap feature, but that also means writing the key/value pairs into a YAML file. Is there a way to set the value from an environment variable?
You can also use envsubst when deploying.
e.g.
cat app/deployment.yaml | envsubst | kubectl apply ...
It will replace all variables in the file with their values.
We are successfully using this approach on our CI when deploying to multiple environments, also to inject the CI_TAG etc into the deployments.
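One caveat: plain envsubst replaces every $VAR-looking token in the file. The gettext implementation accepts a SHELL-FORMAT argument that restricts substitution to the variables you list, which protects any other $ strings in the manifest; a small sketch (the variable name is illustrative):

# only $IMAGE_TAG is substituted; any other '$...' text passes through untouched
export IMAGE_TAG=1.2.3
envsubst '$IMAGE_TAG' < app/deployment.yaml | kubectl apply -f -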
You can't do it automatically; you need an external script to "compile" your template, or use Helm as suggested by @Jakub.
You may want to use a custom bash script, maybe integrated with your CI pipeline.
Given a template yml file called deploy.yml.template very similar to the one you provided, you can use something like this:
#!/bin/bash
# sample value for your variables
MYVARVALUE="nginx:latest"
# read the yml template from a file and substitute the string
# {{MYVARNAME}} with the value of the MYVARVALUE variable
template=`cat "deploy.yml.template" | sed "s/{{MYVARNAME}}/$MYVARVALUE/g"`
# apply the yml with the substituted value
echo "$template" | kubectl apply -f -
I don't think it is possible to set the image through a variable or ConfigMap in Kubernetes, but you can use, for example, Helm to make your deployments much more flexible and configurable.
One line:
cat app-deployment.yaml | sed "s/{{BITBUCKET_COMMIT}}/$BITBUCKET_COMMIT/g" | kubectl apply -f -
In yaml:
...
containers:
- name: ulisses
  image: niceuser/niceimage:{{BITBUCKET_COMMIT}}
...
This kind of thing is painfully easy with ytt:
deployment.yml
#@ load("@ytt:data", "data")
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      container:
      - name: guestbook
        image: #@ data.values.image
values.yml
#@data/values
---
image: nginx@sha256:fe2fa7bb1ceb86c6d9c935bc25c3dd8cbd64f2e95ed5b894f93ae7ffbd1e92bb
Then...
$ ytt -f deployment.yml -f values.yml | kubectl apply -f -
or even better, use ytt's cousin, kapp for a high-control deployment experience:
$ ytt -f deployment.yml -f values.yml | kapp deploy -a guestbook -f -
I create a script called kubectl_create and use it to run the create command. It will substitute any value in the template that is referenced in an environment variable.
#!/bin/bash
set -e
eval "cat <<EOF
$(<$1)
EOF
" | kubectl create -f -
For example, if the template file has:
apiVersion: v1
kind: Service
metadata:
  name: nginx-external
  labels:
    app: nginx
spec:
  loadBalancerIP: ${PUBLIC_IP}
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx
Run kubectl_create nginx-service.yaml and then the environment variable PUBLIC_IP will be substituted before running the actual kubectl create command.
After trying sed and envsubst, I found Kustomize the most elegant and Kubernetes-native way. As an alternative, yq also comes in handy sometimes.
Use Kustomize to change image name
Install the kustomize CLI (e.g. on a Mac this is brew install kustomize) and create a new file called kustomization.yaml in the same directory as your deployment.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
Now use the kustomize edit set image command to change the image name
# optionally define image name
IMAGE_NAME=ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
# replace image tag
kustomize edit set image $IMAGE_NAME
Finally apply your kustomized deployment.yml to your cluster using kubectl apply -k directory/where/your/kustomization/file/is like this:
kubectl apply -k .
For debugging you can see the resulting deployment.yml if you run kustomize build . :
$ kustomize build .
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - image: ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
        name: guestbook
Alternative: Use yq to change image name
Install the YAML processor yq (e.g. via homebrew brew install yq), define your variables and let yq do the replacement:
# define image name
IMAGE_NAME=ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
# replace image tag
yq e ".spec.template.spec.containers[0].image = \"$IMAGE_NAME\"" -i deployment.yaml
Now your deployment.yaml gets the new image version and looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - image: ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
        name: guestbook
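If escaping the shell variable inside the yq expression gets fiddly, yq v4 can also read the value from the environment itself via its strenv() operator; a small alternative sketch (assumes yq v4 and an exported variable):

# let yq read the exported variable instead of interpolating it into the expression
export IMAGE_NAME=ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
yq e '.spec.template.spec.containers[0].image = strenv(IMAGE_NAME)' -i deployment.yaml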
FYI: your deployment.yaml isn't really valid Kubernetes configuration - the container key under template.spec should be spelled containers.
YAML does not read values from another YAML file. As an alternative approach, you could use YAML anchors within a single file:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  annotations:
    namespace: &namespaceId dev
    imageId: &imageId nginx
    podName: &podName nginx-pod
    containerName: &containerName nginx-container
  name: *podName
  namespace: *namespaceId
spec:
  containers:
  - image: *imageId
    name: *containerName
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
My approach:
tools/jinja2-cli.py:
#!/usr/bin/env python3
import os
import sys
from jinja2 import Environment, FileSystemLoader
sys.stdout.write(Environment(loader=FileSystemLoader('templates/')).from_string(sys.stdin.read()).render(env=os.environ) + "\n")
Make rule:
_GENFILES = $(basename $(TEMPLATES))
GENFILES = $(_GENFILES:templates/%=%)
$(GENFILES): %: templates/%.j2 $(MKFILES) tools/jinja2-cli.py .env
	env $$(cat .env | xargs) tools/jinja2-cli.py < $< > $@ || (rm -f $@; false)
Inside the .j2 template file you can use any jinja syntax construct, e.g. {{env.GUEST}} will be replaced by the value of GUEST defined in .env
So your templates/deploy.yaml.j2 would look like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      container:
      - name: guestbook
        image: {{env.GUEST}}
Another approach (using just bash builtins and xargs) might be
export $(cat .env | xargs)
kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      container:
      - name: guestbook
        image: ${GUEST}
EOF
I have been using kubetpl
It has three different template flavors and supports ConfigMap/Secret freezing.
I think the standard, Helm, should be used nowadays instead of custom scripts to solve this problem. You don't need to deploy anything to generate the Kubernetes YAML on your machine.
An example:
Install helm on your machine so helm command exists
https://artifacthub.io/packages/helm/pauls-helm-charts/helloworld - Install button
helm repo add pauls-helm-charts http://tech.paulcz.net/charts
helm pull pauls-helm-charts/helloworld --version 2.0.0
tar -zxvf helloworld-2.0.0.tgz && cd helloworld
helm template -f values.yaml --output-dir helloworld . --namespace my-namespace --name-template=my-name
So it created these files from values.yaml:
wrote helloworld/helloworld/templates/serviceaccount.yaml
wrote helloworld/helloworld/templates/service.yaml
wrote helloworld/helloworld/templates/deployment.yaml
Inside values.yaml you can change the predefined repository (or indeed any value that is rendered into the Kubernetes YAML):
image:
  repository: paulczar/spring-helloworld
Now if you want to deploy, make sure kubectl works and just apply these generated files using kubectl apply -f serviceaccount.yaml, etc.
I created a script called kubectl_apply. It loads variables from .env, replaces ${CUSTOMVAR} in the YAML and passes the result to kubectl:
#!/bin/bash
set -a
source .env
set +a
eval "cat <<EOF
$(<$1)
EOF
" | kubectl apply -f -
I've published a command-line tool ysed that helps exactly with that, in case you plan to script it.
If you just want to change the image or a tag while your deployment is running, you could set the image of a specific container in your deployment:
kubectl apply -f k8s
kubectl set image deployments/worker-deployment worker=IMAGE:TAG
Create a file called kubectl_advance as below and enjoy calling it just like the kubectl commands, e.g.:
export MY_VAL="my-v1"
kubectl_advance -c -f sample.yaml  # -c option calls the create command
kubectl_advance -r -f sample2.yaml # -r option calls the replace command
This assumes the YAML file contains values like ${MY_VAL} to be replaced by the environment variable.
#!/usr/bin/env bash

helpFunction()
{
  echo "Supported option is [-f] for file"
  exit 1
}

while getopts "f:cr" opt
do
  case "$opt" in
    f ) yamlFile="$OPTARG" ;;
    c ) COMMAND_IS_CREATE="true" ;;
    r ) COMMAND_IS_REPLACE="true" ;;
    ? ) helpFunction ;; # print help in case the parameter is non-existent
  esac
done

echo 'yaml file is : '$yamlFile

YAML_CONTENT=`eval "cat <<EOF
$(<$yamlFile)
EOF
"`

echo 'Final File Content is :=>'
echo '------------------'
echo "$YAML_CONTENT"

if [[ "$COMMAND_IS_CREATE" == "true" ]]; then
  COMMAND="create"
fi

if [[ "$COMMAND_IS_REPLACE" == "true" ]]; then
  COMMAND="replace"
fi

echo "$YAML_CONTENT" | kubectl $COMMAND -f -
Helm is meant exactly for such things, and a lot more: it handles complex sets of resource deployments as a group, etc.
But if we are still looking for a simple alternative, how about using Ant?
If you want to modify the file as part of a build or test process, you can go with an Ant task as well.
Using Ant you can load all environment values as properties, or you can simply load a properties file:
<property environment="env" />
<property file="build.properties" />
Then you can have a target which converts template files into your desired yaml file.
<target name="generate_from_template">
  <!-- Copy task to replace values and create the new file -->
  <copy todir="${dest.dir}" verbose="true" overwrite="true" failonerror="true">
    <!-- List of files to be processed -->
    <fileset file="${source.dir}/xyz.template.yml" />
    <!-- Mapper to transform the filename: removes '.template' from the file
         name when copying the file to the output directory -->
    <mapper type="regexp" from="(.*).template(.*)" to="\1\2" />
    <!-- Filter chain that replaces the template values with actual values
         fetched from the properties file -->
    <filterchain>
      <expandproperties />
    </filterchain>
  </copy>
</target>
Of course, you can use a fileset instead of a single file in case you want to process multiple files (nested or otherwise) dynamically.
Your template file xyz.template.yml should look like:
apiVersion: v1
kind: Service
metadata:
  name: ${XYZ_RES_NAME}-ser
  labels:
    app: ${XYZ_RES_NAME}
    version: v1
spec:
  type: NodePort
  ports:
  - port: ${env.XYZ_RES_PORT}
    protocol: TCP
  selector:
    app: ${XYZ_RES_NAME}
    version: v1
The env.-prefixed property is loaded from environment variables, and the others come from the properties file.
Hope it helps :)
In the Jitsi project, the tpl (frep) command is used to substitute values; it is an extension of envsubst: https://github.com/jitsi/docker-jitsi-meet/issues/65
I keep using the old shell tools like sed and friends, but such code quickly becomes unreadable when there is more than a handful of values to deal with.
For my deployments, I typically use Helm charts. It requires me to update values.yaml files periodically.
For dynamically updating YAML files, I used 'envsubst' since it is simple and does not require sophisticated configuration.
In addition, most of the tools only work with valid Kubernetes manifests, not simple YAML files.
I created a simple script to handle the YAML modification and simplify the usage:
https://github.com/alexusarov/vars_replacer
Example:
./vars_replacer.sh -i [input_file] -o [output_file] -p "[key=value] [key=value]"