Cloud Build: export Cloud SQL with date in filename

Here is my pipeline:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['sql', 'export', 'sql', '$_DB_INSTANCE_NAME', 'gs://$_BUCKET_NAME/$_FILENAME.sql', '--database=$_DB_DATABASE']
options:
  dynamic_substitutions: true
  substitution_option: 'ALLOW_LOOSE'
timeout: 3600s
I declared my variable $_FILENAME inside the Cloud Build pipeline and set its value to Cloud_Export_$(date +%Y-%m-%d).
But I got this error: Compilation failed: [_FILENAME -> Cloud_Export_$(date +%Y-%m-%d)]: generic::invalid_argument: unable to fetch payload for date +%Y-%m-%d
So I tried removing the $() from my $_FILENAME, leaving Cloud_Export_date +%Y-%m-%d, and got:
Exporting Cloud SQL instance...
...failed.
ERROR: (gcloud.sql.export.sql) [ERROR_RDBMS] GCS URI "gs://export_bdd/Cloud_Export_date +%Y-%m-%d.sql" is empty or invalid
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/gcloud" failed: step exited with non-zero status: 1
How can I add the current date to my filename?
Edit
I tried creating another variable _CURRENT_DATE with the value date +%Y-%m-%d, then changed my $_FILENAME variable to Cloud_Export_${_CURRENT_DATE}.
Now I don't get any errors, but the date is empty: the filename is Cloud_Export_.sql.

I found a solution: I removed the $_FILENAME variable, switched to the sh entrypoint, and added a double $ to the date command, $$(date +%Y-%m-%d). It works:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'sh'
  args: ['-c', 'gcloud sql export sql $_DB_INSTANCE gs://$_BUCKET_NAME/filename_$$(date +%Y-%m-%d).sql --database=$_DB_NAME']
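This works because Cloud Build's substitution pass collapses $$ into a literal $, so $(date +%Y-%m-%d) survives substitution untouched and is expanded by the shell when the step runs. A minimal sketch of the same idea, assuming a hypothetical bucket my-bucket and using a harmless echo instead of the real export:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  # Cloud Build rewrites $$ to $ before the step starts, so the shell
  # (not the substitution engine) evaluates $(date +%Y-%m-%d) at run time.
  - 'echo "would export to gs://my-bucket/Cloud_Export_$$(date +%Y-%m-%d).sql"'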

Related

gcloud builds submit substitutions not working properly. (ERROR: (gcloud.builds.submit) INVALID_ARGUMENT.)

I have the following config:
steps:
- name: 'alpine'
  args: ['echo', 'B: ${_BRANCH}', 'T: ${_TAG}', 'C => ${_CLIENT}']
If I run with:
gcloud builds submit --config=gcp/cloudbuild-main.yaml --substitutions _CLIENT='client',_BRANCH='branch',_TAG='tag' .
I get the following message:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: generic::invalid_argument: key in the template "_BRANCH" is not matched in the substitution data; substitutions = map[_CLIENT:client _BRANCH=branch _TAG=tag];key in the template "_TAG" is not matched in the substitution data; substitutions = map[_CLIENT:client _BRANCH=branch _TAG=tag];key "_CLIENT" in the substitution data is not matched in the template
If I declare the substitutions:
steps:
- name: 'alpine'
  args: ['echo', 'B: ${_BRANCH}', 'T: ${_TAG}', 'C => ${_CLIENT}']

substitutions:
  _BRANCH: b1
  _TAG: latest
  _CLIENT: c
It runs, but the substitution picks up only the first variable and the others become part of its value:
BUILD
Pulling image: alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:21a3deaa0d32a8057914f36584b5288d2e5ecc984380bc0118285c70fa8c9300
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest
B: b1 T: latest C => client _BRANCH=branch _TAG=tag
PUSH
DONE
There's a syntax nit in your command which should be resolved by:
gcloud builds submit --config=gcp/cloudbuild-main.yaml --substitutions=_CLIENT="client",_BRANCH="branch",_TAG="tag" .
After submitting the build:
B: branch T: tag C => client
Reference: https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values

Azure DevOps pipeline template - how to concatenate a parameter

All afternoon I have been trying to get my head around concatenating a parameter in an ADO template. The parameter is a source path, and in the template a further folder level needs to be appended. I would like to achieve this with a "simple" concatenation.
The simplified template takes the parameter and uses it to form the inputPath for a PowerShell script, like this:
parameters:
  sourcePath: ''

steps:
- task: PowerShell@2
  inputs:
    filePath: 'PSRepo/Scripts/MyPsScript.ps1'
    arguments: '-inputPath ''$(sourcePath)/NextFolder'''
I have tried various ways to achieve this concatenation:

1. '$(sourcePath)/NextFolder' (see above)
2. '$(variables.sourcePath)/NextFolder' (I know sourcePath is not a variable, but I tried this based on the fact that a parameter used in a task condition apparently only works when referenced through variables)
3. '${{ parameters.sourcePath }}/NextFolder'

And some other variations, all to no avail.
I also tried to introduce a variables section in the template, but that is not possible.
I have searched the internet for examples and documentation, but found no direct answers; other issues seemed to hint at a solution but did not work.
I will surely be very pleased if someone could help me out. Thanks in advance.
We can add a variables section in our temp.yaml template file, pass the source path into a variable, and then use it. Here is my demo script:
Main.yaml
resources:
  repositories:
  - repository: templates
    type: git
    name: Tech-Talk/template

trigger: none

variables:
- name: Test
  value: TestGroup

pool:
  # vmImage: windows-latest
  vmImage: ubuntu-20.04

extends:
  template: temp.yaml@templates
  parameters:
    agent_pool_name: ''
    db_resource_path: $(System.DefaultWorkingDirectory)
    # variable_group: ${{variables.Test}}
temp.yaml
parameters:
- name: db_resource_path
  default: ""
# - name: 'variable_group'
#   type: string
#   default: 'default_variable_group'
- name: agent_pool_name
  default: ""

stages:
- stage:
  jobs:
  - job: READ
    displayName: Reading Parameters
    variables:
    - name: sourcePath
      value: ${{parameters.db_resource_path}}
    # - group: ${{parameters.variable_group}}
    steps:
    - script: |
        echo sourcePath: ${{variables.sourcePath}}
    - powershell: echo "$(sourcePath)"
Here I just used the default working directory as the test path; you can use your own variables as well.
Thanks, Yujun. In the meantime I did get it working. Apparently there must have been some typo that blocked the script from executing correctly, as the solution looks like one of the options mentioned above.
parameters:
  sourcePath: ''

steps:
- task: PowerShell@2
  inputs:
    filePath: 'PSRepo/Scripts/MyPsScript.ps1'
    arguments: '-inputPath ''$(sourcePath)/NextFolder'''
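For reference, a minimal sketch of the compile-time alternative from attempt 3 above (same hypothetical script path); template expressions are expanded when the template is processed, so the concatenation happens before the pipeline runs:
parameters:
  sourcePath: ''

steps:
- task: PowerShell@2
  inputs:
    filePath: 'PSRepo/Scripts/MyPsScript.ps1'
    # ${{ }} is resolved at template compile time, before any task runs.
    arguments: '-inputPath ''${{ parameters.sourcePath }}/NextFolder'''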

A playbook with two roles: running role B complains about role A's code, which ran successfully

I am experiencing a strange behavior: when I run role B, it complains about role A's code, which I can run successfully! I have reduced this to the following minimal example:
$ cat playbooka.yml
- hosts:
    - host_a
  roles:
    - role: rolea
      tags:
        - taga
    - role: roleb
      tags:
        - tagb
I have tagged the two roles because I want to run role A or role B selectively. They consist of simple tasks, as shown below in this minimal example:
$ cat roles/rolea/tasks/main.yml
- name: Get service_facts
  service_facts:

- debug:
    msg: '{{ ansible_facts.services["amazon-ssm-agent"]["state"] }}'

- when: ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
  meta: end_play

$ cat roles/roleb/tasks/main.yml
- debug:
    msg: "I am roleb"
The preview confirms that I can run individual roles as specified by tags:
$ ansible-playbook playbooka.yml -t taga -D -C --list-hosts --list-tasks

playbook: playbooka.yml

  play #1 (host_a): host_a    TAGS: []
    pattern: ['host_a']
    hosts (1):
      3.11.111.4

    tasks:
      rolea : Get service_facts    TAGS: [taga]
      debug    TAGS: [taga]

$ ansible-playbook playbooka.yml -t tagb -D -C --list-hosts --list-tasks

playbook: playbooka.yml

  play #1 (host_a): host_a    TAGS: []
    pattern: ['host_a']
    hosts (1):
      3.11.111.4

    tasks:
      debug    TAGS: [tagb]
I can run role A OK:
$ ansible-playbook playbooka.yml -t taga -D -C
PLAY [host_a] *************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
TASK [rolea : Get service_facts] ******************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
TASK [rolea : debug] ******************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4] => {
"msg": "running"
}
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************
3.11.111.4 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
But when I run role B, it complains about the code in role A, which I just ran successfully!
$ ansible-playbook playbooka.yml -t tagb -D -C
PLAY [host_a] *************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************
ok: [3.11.111.4]
ERROR! The conditional check 'ansible_facts.services["amazon-ssm-agent"]["state"] != "running"' failed. The error was: error while evaluating conditional (ansible_facts.services["amazon-ssm-agent"]["state"] != "running"): 'dict object' has no attribute 'services'
The error appears to be in '<path>/roles/rolea/tasks/main.yml': line 9, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- when: ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
^ here
We could be wrong, but this one looks like it might be an issue with
unbalanced quotes. If starting a value with a quote, make sure the
line ends with the same set of quotes. For instance this arbitrary
example:
foo: "bad" "wolf"
Could be written as:
foo: '"bad" "wolf"'
I have two questions:

1. Why should role A's code be involved at all?
2. Even if it gets involved, ansible_facts does have services, and the service is "running", as shown above by running role A.
PS: I am using the latest Ansible 2.10.2 and the latest Python 3.9.1 locally on macOS. The remote Python can be either 2.7.12 or 3.5.2 (Ubuntu 16.04). I worked around the problem by testing whether the dictionary has the services key:
ansible_facts.services is not defined or ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
but it still surprises me that role B interprets role A's code, and interprets it incorrectly. Is this a bug that I should report?
From the notes in the meta module documentation:
Skipping meta tasks with tags is not supported before Ansible 2.11.
Since you run Ansible 2.10, the when condition for your meta task in rolea is always evaluated, whatever tag you use. When you use -t tagb, ansible_facts.services["amazon-ssm-agent"] does not exist because you skipped service_facts, and you then get the error you reported.
You can either:

- upgrade to Ansible 2.11 (might be a little soon as I write this answer, since it is not yet available over pip...), or
- rewrite your condition so that the meta task is skipped when the var does not exist, e.g.:

when:
  - ansible_facts.services["amazon-ssm-agent"]["state"] is defined
  - ansible_facts.services["amazon-ssm-agent"]["state"] != "running"
The second solution is still good practice IMO in any situation (e.g. sharing your work with someone running an older version, or accidentally running against a host without the agent installed...).
One other possibility in your specific case is to move the service_facts task to another role higher in the play order, or to the pre_tasks section of your playbook, and tag it always. In that case the task will always run and the fact will always exist, whatever tag you use; see the sketch below.
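A minimal sketch of that pre_tasks variant, reusing the playbook from the question (host and role names as above):
- hosts:
    - host_a
  pre_tasks:
    # Tagged "always" so the facts exist no matter which role tags are selected.
    - name: Get service_facts
      service_facts:
      tags:
        - always
  roles:
    - role: rolea
      tags:
        - taga
    - role: roleb
      tags:
        - tagb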

GCP Dataproc - Error: Unknown name "optionalComponents" at 'cluster.config': Cannot find field

I am trying to create a Dataproc cluster using the configuration specified in a YAML file (using import).
The command I have been using successfully:
$ gcloud beta dataproc clusters import $CLUSTER_NAME --region=$REGION \
    --source=cluster_conf_file.yaml
Later on I tried adding the HBASE component, which is one of the available optional components, using the --optional-components flag:
$ gcloud beta dataproc clusters import $CLUSTER_NAME --optional-components=HBASE --region=$REGION \
    --source=cluster_conf_file.yaml
(Documentation referred:
https://cloud.google.com/dataproc/docs/concepts/components/hbase#installing_the_component)
This caused the error below:
ERROR: (gcloud.beta.dataproc.clusters.import) unrecognized arguments: --optional-components=HBASE
Then I tried specifying --optional-components as optionalComponents in the YAML file (instead of passing it on the command line), referring to this documentation.
Sample YAML:
config:
  endpointConfig:
    enableHttpPortAccess: BOOLEAN_VALUE
  configBucket: BUCKET_NAME
  gceClusterConfig:
    serviceAccount: SERVICE_ACCOUNT
    subnetworkUri: SUBNETWORK_URI
    tags:
    - TAG1
    - TAG2
  optionalComponents:    # <---- attribute causing the error
  - HBASE
  softwareConfig:
    imageVersion: IMAGE_VERSION
    properties:
      PROPERTY: VALUE
  ...
  masterConfig:
    diskConfig:
      bootDiskSizeGb: SIZE
      bootDiskType: TYPE
    machineTypeUri: TYPE_URI
    numInstances: COUNT
This caused the error below:
ERROR: (gcloud.dataproc.clusters.import) INVALID_ARGUMENT: Invalid JSON payload received. Unknown name "optionalComponents" at 'cluster.config': Cannot find field.
- '@type': type.googleapis.com/google.rpc.BadRequest
  fieldViolations:
  - description: "Invalid JSON payload received. Unknown name \"optionalComponents\"\
      \ at 'cluster.config': Cannot find field."
    field: cluster.config
Is there a way to fix this?
optionalComponents should be under config.softwareConfig:
config:
  ...
  softwareConfig:
    imageVersion: IMAGE_VERSION
    optionalComponents:
    - ZOOKEEPER
    - HBASE
You can prove it to yourself by first creating a cluster with optional components and then exporting it to a YAML file, as sketched below.
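A minimal sketch of that export check (cluster name, region, and output path are placeholders):
$ gcloud dataproc clusters export $CLUSTER_NAME --region=$REGION --destination=exported_cluster.yaml
The exported file shows where each field, including optionalComponents, is expected to sit.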

Error when creating indexes for flexible Cloud Datastore: Unexpected attribute 'indexes' for object of type AppInfoExternal

When I access the Cloud Datastore web management console, there are no indexes listed under the "Indexes" section, and I would like to define some indexes explicitly in order to run advanced queries. I have a YAML file that looks like:
indexes:
- kind: order
  ancestor: no
  properties:
  - name: email
  - name: name
  - name: ownerId
  - name: status
  - name: updated_at
  - name: created_at
    direction: desc
And I run the following command to create the indexes:
gcloud preview datastore create-indexes indexes.yaml
and this is the error message that I'm getting:
"Unexpected attribute 'indexes' for object of type AppInfoExternal"
Has anybody come across the same issue? Any ideas?
Regards,
Jose
Unfortunately, the create-indexes command is a little brittle: it requires that the index file you provide is named index.yaml and not indexes.yaml. Otherwise, it will try to parse it as a different type of configuration.
Try renaming your index file to index.yaml and then calling the command again, for example:
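A minimal sketch of that fix, reusing the command from the question:
$ mv indexes.yaml index.yaml
$ gcloud preview datastore create-indexes index.yaml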