Consider this piece of an AWS Batch job definition:
MyJobDefinition:
  Type: "AWS::Batch::JobDefinition"
  Properties:
    Type: container
    Parameters: {}
    JobDefinitionName: "my-job-name"
    ContainerProperties:
      Command:
        - "java"
        - "-jar"
        - "my-application-SNAPSHOT.jar"
        - "--param1"
        - "Ref::param1"
        - "--param2"
        - "Ref::param2"
This results in the following call:
java -jar my-application-SNAPSHOT.jar --param1 someValue1 --param2 someValue2
How do I change the job definition to make it look like this (notice the = signs)?
java -jar my-application-SNAPSHOT.jar --param1=someValue1 --param2=someValue2
Please note that Ref::param1 is not a CloudFormation template parameter, but an AWS Batch job parameter.
As I understand it, AWS Batch substitutes parameters by looking for the Ref:: prefix. I could find only one thread where someone used a parameter inside a larger string, and it worked.
Given that, the following should work:
MyJobDefinition:
  Type: "AWS::Batch::JobDefinition"
  Properties:
    Type: container
    Parameters: {}
    JobDefinitionName: "my-job-name"
    ContainerProperties:
      Command:
        - "java"
        - "-jar"
        - "my-application-SNAPSHOT.jar"
        - "--param1=Ref::param1"
        - "--param2=Ref::param2"
I have used the following deployment for the example code from the Google Cloud Functions tutorial. The function should simply print the statements below when a new item is added to my bucket (which happens every half hour).
Example code (the file is also called hello_gcs.py):
def hello_gcs(event, context):
    print('Event ID: {}'.format(context.event_id))
    print('Event type: {}'.format(context.event_type))
    print('Bucket: {}'.format(event['bucket']))
    print('File: {}'.format(event['name']))
    print('Metageneration: {}'.format(event['metageneration']))
    print('Created: {}'.format(event['timeCreated']))
    print('Updated: {}'.format(event['updated']))
I deploy it with:
gcloud functions deploy hello_gcs \
  --trigger-resource bucket1 \
  --trigger-event google.storage.object.finalize
I get the following error in my logs:
insertId: "000000-f7b8ac5b-61f2-4d37-902a-b21ab56372c9"
labels: {1}
logName: "projects/project-name-v2/logs/cloudfunctions.googleapis.com%2Fcloud-functions"
receiveTimestamp: "2021-10-20T11:38:19.093774441Z"
resource: {2}
severity: "ERROR"
textPayload: "Function cannot be initialized. Error: memory limit exceeded."
timestamp: "2021-10-20T11:38:18.112056018Z"
and yet the function is so simple and small that I find this hard to understand.
Any ideas what I am doing wrong here? Any help would be appreciated.
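One thing worth checking (offered as a guess, not a confirmed diagnosis): first-generation Cloud Functions default to 256MB of memory, and "memory limit exceeded" at initialization can mean the runtime plus dependencies need more. A redeploy with a larger limit might look like this; the --runtime flag is shown as an assumption, since the original command omits it:

gcloud functions deploy hello_gcs \
  --runtime python39 \
  --memory 512MB \
  --trigger-resource bucket1 \
  --trigger-event google.storage.object.finalize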
I am looking to train a model using Google Cloud's new service - the Unified AI Platform. To do so I am using a config.yaml that looks like this:
workerPoolSpecs:
  workerPoolSpec:
    machineSpec:
      machineType: n1-highmem-16
      acceleratorType: NVIDIA_TESLA_P100
      acceleratorCount: 2
    replicaCount: 1
    pythonPackageSpec:
      executorImageUri: us-docker.pkg.dev/cloud-aiplatform/training/tf-gpu.2-4:latest
      packageUris: gs://path/to/bucket/unified_ai_platform/src_dist/trainer-0.1.tar.gz
      pythonModule: trainer.task
  workerPoolSpec:
    machineSpec:
      machineType: n1-highmem-16
      acceleratorType: NVIDIA_TESLA_P100
      acceleratorCount: 2
    replicaCount: 2
    pythonPackageSpec:
      executorImageUri: us-docker.pkg.dev/cloud-aiplatform/training/tf-gpu.2-4:latest
      packageUris: gs://path/to/bucket/unified_ai_platform/src_dist/trainer-0.1.tar.gz
      pythonModule: trainer.task
However, for distributed training I am unable to understand how to pass multiple workerPoolSpecs in this file. The example YAML file provided does not cover the case of providing multiple workerPoolSpecs.
The example's documentation also says that "You can specify multiple worker pool specs in order to create a custom job with multiple worker pools".
Any help in this regard will be appreciated.
Answering my own question. The config.yaml file should look like this:
workerPoolSpecs:
  - machineSpec:
      machineType: n1-standard-16
      acceleratorType: NVIDIA_TESLA_P100
      acceleratorCount: 2
    replicaCount: 1
    containerSpec:
      imageUri: gcr.io/path/to/container:v2
      args:
        - --model-dir=gs://path/to/model
        - --tfrecord-dir=gs://path/to/training/data/
        - --epochs=2
  - machineSpec:
      machineType: n1-standard-16
      acceleratorType: NVIDIA_TESLA_P100
      acceleratorCount: 2
    replicaCount: 2
    containerSpec:
      imageUri: gcr.io/path/to/container:v2
      args:
        - --model-dir=gs://path/to/models
        - --tfrecord-dir=gs://path/to/training/data/
        - --epochs=2
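For completeness, a config.yaml like this can then be submitted as a custom job; a minimal sketch using the gcloud CLI (region and display name are illustrative):

gcloud ai custom-jobs create \
  --region=us-central1 \
  --display-name=my-distributed-training-job \
  --config=config.yaml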
All afternoon I have been trying to get my head around concatenating a parameter in an ADO template. The parameter is a source path, and in the template the next folder level needs to be appended to it. I would like to achieve this with a "simple" concatenation.
The simplified template takes the parameter and uses it to form the inputPath for a PowerShell script, like this:
parameters:
  sourcePath: ''

steps:
  - task: PowerShell@2
    inputs:
      filePath: 'PSRepo/Scripts/MyPsScript.ps1'
      arguments: '-inputPath ''$(sourcePath)/NextFolder'''
I have tried various ways to achieve this concatenation:
'$(sourcePath)/NextFolder'
see above
'$(variables.sourcePath)/NextFolder'
I know sourcePath is not a variable, but I tried this based on the fact that when using a parameter in a task condition, it apparently only works when referenced through variables
'${{ parameters.sourcePath }}/NextFolder'
And some other variations, all to no avail.
I also tried to introduce a variables section in the template, but that is not possible.
I have searched the internet for examples and documentation, but found no direct answers; other issues seemed to hint at a solution but did not work.
I will surely be very pleased if someone could help me out.
Thanks in advance.
We can add a variable in our temp YAML file and pass the sourcePath to that variable; then we can use it. Here is my demo script:
Main.yaml
resources:
  repositories:
    - repository: templates
      type: git
      name: Tech-Talk/template

trigger: none

variables:
  - name: Test
    value: TestGroup

pool:
  # vmImage: windows-latest
  vmImage: ubuntu-20.04

extends:
  template: temp.yaml@templates
  parameters:
    agent_pool_name: ''
    db_resource_path: $(System.DefaultWorkingDirectory)
    # variable_group: ${{variables.Test}}
temp.yaml
parameters:
  - name: db_resource_path
    default: ""
  # - name: 'variable_group'
  #   type: string
  #   default: 'default_variable_group'
  - name: agent_pool_name
    default: ""

stages:
  - stage:
    jobs:
      - job: READ
        displayName: Reading Parameters
        variables:
          - name: sourcePath
            value: ${{parameters.db_resource_path}}
          # - group: ${{parameters.variable_group}}
        steps:
          - script: |
              echo sourcePath: ${{variables.sourcePath}}
          - powershell: echo "$(sourcePath)"
Here, I just use the working directory as the test path. You can use other variables as well.
Thanks, Yujun. In the meantime I did get it working. Apparently there must have been some typo that blocked the script from executing correctly, as the solution looks like one of the options mentioned above:
parameters:
  sourcePath: ''

steps:
  - task: PowerShell@2
    inputs:
      filePath: 'PSRepo/Scripts/MyPsScript.ps1'
      arguments: '-inputPath ''$(sourcePath)/NextFolder'''
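For reference, the compile-time template-expression form from the list above also concatenates directly, without routing the parameter through a variable; a minimal sketch of the same step:

parameters:
  sourcePath: ''

steps:
  - task: PowerShell@2
    inputs:
      filePath: 'PSRepo/Scripts/MyPsScript.ps1'
      arguments: '-inputPath ''${{ parameters.sourcePath }}/NextFolder'''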
I am experiencing strange behavior: when I run role B, it complains about role A's code, which I can run successfully! I have reduced this to the following minimal example:
$ cat playbooka.yml
- hosts:
    - host_a
  roles:
    - role: rolea
      tags:
        - taga
    - role: roleb
      tags:
        - tagb
I have tagged the two roles because I want to selectively run role A or role B. They consist of simple tasks, as shown below in this minimal example:
$ cat roles/rolea/tasks/main.yml
- name: Get service_facts
  service_facts:

- debug:
    msg: '{{ ansible_facts.services["amazon-ssm-agent"]["state"] }}'

- when: ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
  meta: end_play

$ cat roles/roleb/tasks/main.yml
- debug:
    msg: "I am roleb"
The preview confirms that I can run individual roles as specified by tags:
$ ansible-playbook playbooka.yml -t taga -D -C --list-hosts --list-tasks

playbook: playbooka.yml

  play #1 (host_a): host_a    TAGS: []
    pattern: ['host_a']
    hosts (1):
      3.11.111.4
    tasks:
      rolea : Get service_facts    TAGS: [taga]
      debug    TAGS: [taga]

$ ansible-playbook playbooka.yml -t tagb -D -C --list-hosts --list-tasks

playbook: playbooka.yml

  play #1 (host_a): host_a    TAGS: []
    pattern: ['host_a']
    hosts (1):
      3.11.111.4
    tasks:
      debug    TAGS: [tagb]
I can run role A OK:
$ ansible-playbook playbooka.yml -t taga -D -C
PLAY [host_a] *************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
TASK [rolea : Get service_facts] ******************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
TASK [rolea : debug] ******************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4] => {
"msg": "running"
}
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************
3.11.111.4 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
But when I run role B, it complains about the code in role A that I just successfully ran!
$ ansible-playbook playbooka.yml -t tagb -D -C
PLAY [host_a] *************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
ERROR! The conditional check 'ansible_facts.services["amazon-ssm-agent"]["state"] != "running"' failed. The error was: error while evaluating conditional (ansible_facts.services["amazon-ssm-agent"]["state"] != "running"): 'dict object' has no attribute 'services'
The error appears to be in '<path>/roles/rolea/tasks/main.yml': line 9, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- when: ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
^ here
We could be wrong, but this one looks like it might be an issue with
unbalanced quotes. If starting a value with a quote, make sure the
line ends with the same set of quotes. For instance this arbitrary
example:
foo: "bad" "wolf"
Could be written as:
foo: '"bad" "wolf"'
I have two questions:
Why should role A's code be involved at all?
Even if it does get involved, ansible_facts has services, and the service is "running", as shown above by running role A.
PS: I am using the latest Ansible 2.10.2 and the latest Python 3.9.1 locally on macOS. The remote Python can be either 2.7.12 or 3.5.2 (Ubuntu 16.04). I worked around the problem by testing whether the dictionary has the services key:
ansible_facts.services is not defined or ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
but it still surprises me that role B interprets role A's code, and interprets it incorrectly. Is this a bug that I should report?
From the notes in the meta module documentation:
Skipping meta tasks with tags is not supported before Ansible 2.11.
Since you run Ansible 2.10, the when condition for your meta task in rolea is always evaluated, whatever tag you use. When you use -t tagb, ansible_facts.services["amazon-ssm-agent"] does not exist because you skipped service_facts, and you then get the error you reported.
You can either:
upgrade to Ansible 2.11 (might be a little soon as I write this answer, since it is not yet available over pip...)
rewrite your condition so that the meta task skips when the var does not exist, e.g.
when:
  - ansible_facts.services["amazon-ssm-agent"]["state"] is defined
  - ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
The second solution is still good practice IMO in any situation (e.g. you share your work with someone running an older version, or you accidentally run against a host without the agent installed...).
One other possibility in your specific case is to move the service_facts task to another role higher in the play order, or into the pre_tasks section of your playbook, and tag it always. In that case the task will always run and the fact will always exist, whatever tag you use, as in the sketch below.
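A minimal sketch of that pre_tasks variant, reusing the playbook from the question (only the pre_tasks block and its always tag are new):

- hosts:
    - host_a
  pre_tasks:
    - name: Get service_facts
      service_facts:
      tags:
        - always
  roles:
    - role: rolea
      tags:
        - taga
    - role: roleb
      tags:
        - tagb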
I am new to YAML and build pipelines. I am receiving the following error; can anyone advise what's wrong with the target folder?
Unhandled: Input required: TargetFolder
##[warning]Directory 'D:\a\1\a' is empty. Nothing will be added to build artifact 'drop'.
Below is my YAML file:
# Build app using Azure Pipelines
pool:
  vmImage: 'vs2017-win2016'

steps:
  - script: echo hello world
  - task: NodeTool@0
    inputs:
      versionSpec: '8.x'
  - task: CopyFiles@1
    displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
    inputs:
      SourceFolder: '$(build.sourcesdirectory)'
      Contents:
        \C:\VSCodeGit\CollMod.Web\Web.config\
      TartgetFolder: '$(Build.ArtifactStagingDirectory)'
    condition: succeededOrFailed()
  - task: PublishBuildArtifacts@1
    displayName: 'Publish Artifact: drop'
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    condition: succeededOrFailed()
I think it's the Contents field that's invalid here.
The docs at https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/copy-files?view=vsts&tabs=yaml and the file-matching-patterns documentation at https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/file-matching-patterns?view=vsts both give some great examples.
If you're unsure, set the contents to **/*, which will copy absolutely everything in $(build.sourcesdirectory); that will give you a feel for the shape of the directory structure, so that you can change **/* into something more selective, scoped to the file(s) you want to copy.
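Applied to the pipeline above, a corrected copy task might look like the sketch below. Note that the input also has to be spelled TargetFolder; the question's YAML has TartgetFolder, which is consistent with the "Input required: TargetFolder" error:

- task: CopyFiles@1
  displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: '**/*'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
  condition: succeededOrFailed()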
The SourceFolder should be '$(Build.SourcesDirectory)' instead of '$(build.sourcesdirectory)'.
This is from: https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml#build-variables