I followed this tutorial to successfully set up GCP Slack build notifications. Right now, I have the following Slack message:
// createSlackMessage creates a message from a build object.
const createSlackMessage = (build) => {
  const message = {
    text: `Build \`${build.id}\``,
    mrkdwn: true,
    attachments: [
      {
        title: 'Build logs',
        title_link: build.logUrl,
        fields: [{
          title: 'Status',
          value: build.status
        }]
      }
    ]
  };
  return message;
}
In addition to what's here, I also want to have information like the project ID, the user who deployed it, and other environment variables I am using during deployment (e.g. I use _ENV to distinguish the dev server from the production server). What is the way to extract such information? Where can I find a reference listing the fields the build object has? If the build object doesn't have my desired field by default, can I add it somehow?
Have a look here; there you have all the available options that you can use.
Hope this helps.
UPDATE:
Not sure if you can add custom variables, but I think substitutions might be what you are looking for.
Use substitutions in your build config file to substitute specific
variables at runtime. Substitutions are helpful for variables whose
value isn't known until build time, or to re-use an existing build
request with different variable values.
Cloud Build provides built-in substitutions or you can define your own
substitutions. Use the substitutions field in your build's steps and
images fields to resolve their values at build time.
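For instance, a user-defined substitution like your _ENV could be declared in cloudbuild.yaml roughly like this (a sketch; user-defined substitution names must start with an underscore, and the deploy step shown is only an illustration):
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--version=${_ENV}']
substitutions:
  _ENV: 'dev'  # default; override per build, e.g. gcloud builds submit --substitutions=_ENV=prod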
Here you have more information about them.
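Built-in fields like projectId and the substitutions map are part of the Build resource that the notification function receives, so the message could be extended roughly like this (a sketch based on your function):
// Sketch: extending createSlackMessage with the project ID and a
// user-defined substitution (assumes the function receives the Build
// resource from the cloud-builds Pub/Sub topic).
const createSlackMessage = (build) => {
  const message = {
    text: `Build \`${build.id}\``,
    mrkdwn: true,
    attachments: [
      {
        title: 'Build logs',
        title_link: build.logUrl,
        fields: [
          { title: 'Status', value: build.status },
          { title: 'Project', value: build.projectId },
          // substitutions is a plain key/value object on the build
          { title: 'Environment', value: (build.substitutions || {})._ENV }
        ]
      }
    ]
  };
  return message;
}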
Let me know.
In the app.js, I want to require a different "config" file depending on the stage/account.
for example:
dev account: const config = require("config-dev.json")
prod account: const config = require("config-prod.json")
At first I tried passing it using build --container-env-var-file, but after getting undefined when using process.env.myVar, I think that env file is used at the build stage and has nothing to do with my function, though I could use it in the template creation stage.
So I'm looking now at deploy, and there are a few different things that seem relevant, but it's quite confusing to choose which one is relevant for my use case.
There is the config file, in which case I have no idea how to configure it since I'm in a pipeline context; where would I instruct my process to use the correct JSON?
There are also parameters and mappings.
My JSON is not just a few vars; it's a bit of a complex object. Nothing crazy, but not simple enough to pass the vars one by one.
So I thought a single one containing the filename that I want to use could do the job.
But I have no idea how to tell which stage of deployment I am currently in, or how to pass that value so it's accessible from the Lambda function.
I also faced this issue while executing an AWS Lambda function locally. Rebuilding with the sam build command solved my issue; try configuring your file that way.
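If the goal is to pick a different config file per stage, though, one common pattern (a sketch, not the only option) is to declare a stage parameter in template.yaml, map it to an environment variable under the function's Environment > Variables section, supply it per account with sam deploy --parameter-overrides, and branch on it in app.js. STAGE here is a hypothetical variable name:
// app.js — a minimal sketch; assumes a STAGE environment variable
// (hypothetical name) is set on the function via template.yaml.
const stage = process.env.STAGE || 'dev';
// Resolves to config-dev.json in dev and config-prod.json in prod
const config = require(`./config-${stage}.json`);
Unlike the build-time env file, variables set under Environment in the template are present in process.env at run time.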
In Postman, I can create a set of common tests that run after every endpoint in the collection/folder Tests tab, like so:
pm.test("status code is 200", function () {
pm.response.to.have.status(200);
});
But how should I do this for my schema validation on the response object? Each endpoint has a different expected schema. So I have something like this on each individual endpoint:
const schema = { type: 'array', items: ... }
pm.test('response has correct schema', function () {
  const {data} = pm.response.json();
  pm.expect(tv4.validate(data, schema)).to.be.true;
});
I can't extract this up to the collection level because each schema is different.
Now I find that I want to tweak that test a little bit, but I'll have to copy-and-paste it into 50 endpoints.
What's the recommended pattern here for sharing a test across endpoints?
I had the same issue some years ago, and I found two ways to solve this:
Create a folder for each structure object
You can group all your test cases that share the same structure into a new folder and create a test for that group. The issue with this is that you will be repeating requests across folders. (For this solution, you will need to put your test cases at the folder level.)
Create a validation using regular expressions (recommended)
Specify a set of rules in the name of each request (these rules indicate what kind of structure or response the call should have). Then create a validation in the first parent folder for each type of variation (using regex). (You will need to create documentation and some if statements in your parent folder.)
E.g.: [POST] CRLTHM Create a group of homes
Where the initials mean:
CR: Response must be 201
LT: The response must be a list of items
HM: The type of the response must be a home object
And the regex conditional must be something like this (this is an example, please try to make your regex accurate):
if(/CRLTHM\s/.test(pm.info.requestName))
(In the original screenshot, NA refers just to Not Authenticated.)
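Putting it together, the parent folder's Tests tab could dispatch on the request name roughly like this (a sketch using the tv4 library bundled with Postman; the schema and the exact regexes are placeholders you would adapt):
// Parent-folder Tests script — runs for every request underneath it.
const name = pm.info.requestName;

// CR: response must be 201
if (/CR/.test(name)) {
  pm.test('status code is 201', function () {
    pm.response.to.have.status(201);
  });
}

// LT + HM: response must be a list of home objects
if (/LTHM/.test(name)) {
  // homeListSchema is a hypothetical placeholder schema
  const homeListSchema = { type: 'array', items: { type: 'object' } };
  pm.test('response is a list of homes', function () {
    const { data } = pm.response.json();
    pm.expect(tv4.validate(data, homeListSchema)).to.be.true;
  });
}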
I have been trying this example provided in the Google's Deployment Manager GitHub project.
It works, yet I am not sure about the purpose of creating three instances named instance_create, instance_update and instance_delete.
For example, taken from the link:
instance_create = {
    'name': 'instance_create',
    'action': 'gcp-types/bigtableadmin-v2:bigtableadmin.projects.instances.create',
    'properties': {
        'parent': project_path,
        'instanceId': instance_name,
        'clusters': copy.deepcopy(initial_cluster),
        'instance': context.properties['instance']
    },
    'metadata': {
        'runtimePolicy': ['CREATE']
    }
}
What is the purpose of `action` and `metadata.runtimePolicy`? I have tried to find them in the documentation but failed miserably.
Why are there three `BigTable` instances there?
You are right, the documentation is missing the information that would answer your questions regarding these parameters.
However, it helps to know what's going on in the Deployment Manager example you linked.
First of all, the following line in the config.yaml is where things get tricky:
resources:
- name: my-bigtable
  type: bigtable.py
This line makes a call to the bigtable.py Python file, which sets the deployment's resource types to the ones defined in it, under the GenerateConfig function. See how this is done here.
The resources are returned as {'resources': resources} at the end of it, where the resources variable is a list of the templates created there.
These templates have different name identifiers, which are set by the "name" tag.
So you are not creating three different instances named instance_create, instance_update and instance_delete in this file; you are creating three templates with those names, which are appended to the resources list and later returned to the config.yaml resources.type tag.
These templates will then be sequentially built and executed by the Deployment Manager once the create command is used. Note that they might appear out of order; this is due to not using a schema.
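A minimal sketch of that structure (simplified from the linked bigtable.py; the template dicts are the ones from your excerpt):
# Simplified sketch of bigtable.py's GenerateConfig, not the full file.
def GenerateConfig(context):
    resources = []
    # instance_create / instance_update / instance_delete are template
    # dicts like the one in the question, each with its own action and
    # runtimePolicy.
    resources.append(instance_create)
    resources.append(instance_update)
    resources.append(instance_delete)
    return {'resources': resources}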
It's easier to see this structure in .yaml format. Built with jinja, for example, the template you posted would be:
resources:
- action: gcp-types/bigtableadmin-v2:bigtableadmin.projects.instances.create
  name: instance_create
  metadata:
    runtimePolicy:
    - CREATE
  properties:
    clusters:
      initial:
        defaultStorageType: HDD
        location: projects/<PROJECT_ID>/locations/<PROJECT_LOCATION>
        serveNodes: 4
    instance:
      displayName: My BigTable Instance.
      type: PRODUCTION
    instanceId: my-instance
    parent: projects/<PROJECT_ID>
Notice that the parameters under properties are the fields in the request body of bigtableadmin.projects.instances.create (which nests a clusters object's parameters and an instance object's parameters). Note that the instanceId under properties is always the same, hence the BigTable instance on which the templates make their calls is always the same one.
The thing is that the example you linked not only creates various templates to be run in the same script, but also makes the resource type of each template a call to the BigTable API.
Normally the template resources are specified with the type tag, but since you are calling a resource that directly runs an API call (i.e. instead of just specifying gcp-types/bigtableadmin-v2, you are specifying bigtableadmin-v2:bigtableadmin.projects.instances.create), the action tag is used. I haven't found this difference in usage documented anywhere, but it needs to be specified like that.
You will know you are calling an API 'endpoint' directly if the resource ends with create, update or delete.
Finally, from what I have investigated on my side, metadata.runtimePolicy is tied to the fact that the resource type is an API call (as in the previous point). Once again, I haven't found this documented anywhere.
However, since this is a requirement, you will always have to set the correct value in this field. It basically boils down to having metadata.runtimePolicy set to one of these values, depending on which type of API call you make:
create -> ['CREATE']
update -> ['UPDATE_ON_CHANGE']
delete -> ['DELETE']
Summarizing:
You are not creating three different instances, but three different templates, which do the work on the same BigTable instance.
You need to change the resource type flag to action if you are calling an API endpoint (create/update/delete), instead of just naming the base API.
The metadata.runtimePolicy value is a requirement when making a call to one of the aforementioned endpoints.
I'm in the process of automating the setup of git repositories for a client. This involves setting up build template definitions in TFS 2015. I've chosen to store the configuration of these templates in a separate repository along with many other build / deploy related files (there are hundreds of repos, and they all share similar builds by default).
I'm using the new on-prem TFS 2015 build system (very similar to the VSTS / VSO build system), and querying the API for all kinds of things. Build definitions and Build Definition Templates are what I'm working with right now, which is where my specific problem lies.
A number of the build variables contain API keys which I would like to keep private:
When queried (for either a stored Build Definition or a Build Definition Template), TFS rightly returns a blank string for any variables that are set as "secure" - but the response JSON does not contain any reference to the variable being "secret". Here's an excerpt of JSON returned when I retrieve a Build Definition (its the same for a Build Definition Template):
Call Uri (defined in documentation)
GET
https://{instance}/DefaultCollection/{project}/_apis/build/definitions/{definitionId}?api-version={version}[&revision={int}]
Excerpt
"variables": {
"system.debug": {
"value": "false",
"allowOverride": true
},
"system.prefergit": {
"value": "true"
},
"MySecureVariable": {
"value": ""
}
}
I would expect there to be a property like "isSecret" or similar, but I can't find any documentation or examples referring to this. The docs also don't say that it's unsupported, so I'm at a loss.
If I set the variable to a value (i.e. I use a Build Definition Template to create a new Build Definition), the "secret" option is disabled in the UI, and the key is visible. You could argue that someone could go into the UI and mark the variable as secret manually, but that's an additional step that lends itself to forgetfulness.
These secret variables are stored in a password safe, and I will be prompting the person who runs the scripts to enter them at runtime (or I will automate the read from the safe - not sure yet). That's beside the point though.
Can anyone advise on how I can set secure variables directly via the API?
I figured it out by doing some debugging / HTTP traffic monitoring whilst setting variables in the TFS UI. Basically, my guess was correct!
No matter how you set up the build definition templates, or build definitions, TFS never reports that a variable is secret via the REST API. However, if you store your templates elsewhere (or manipulate the JSON object returned from the "Get Build Definition" or "Get Build Definition Template" REST API call) you can set the setting yourself.
My example above becomes:
"variables": {
"system.debug": {
"value": "false",
"allowOverride": true
},
"system.prefergit": {
"value": "true"
},
"MySecureVariable": {
"value": "",
"isSecret": "true"
}
}
Saving this setting in a Build Definition Template via the REST API does not mean that variable will be marked as secret if you use that template via the TFS UI. However, if you automate the creation of a Build Definition via script and set the property, the resulting Build Definition will mark the variable as secret.
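For completeness, the round-trip could look roughly like this in PowerShell (a sketch; the server, project and definition id are hypothetical placeholders, and the PUT goes to the same definitions endpoint used for the GET above):
# Sketch: fetch a build definition, mark a variable as secret, PUT it back.
$uri = "https://tfs.example.com/DefaultCollection/MyProject/_apis/build/definitions/42?api-version=2.0"
$def = Invoke-RestMethod -Uri $uri -UseDefaultCredentials

# Set the value (e.g. read from the password safe) and flag it as secret
$def.variables.MySecureVariable.value = "value-from-password-safe"
$def.variables.MySecureVariable |
    Add-Member -NotePropertyName isSecret -NotePropertyValue $true -Force

Invoke-RestMethod -Uri $uri -Method Put -ContentType "application/json" `
    -Body ($def | ConvertTo-Json -Depth 20) -UseDefaultCredentials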
What a faff about!
In SSIS, I already have a Web Service Task using a WSDL for sending SMS. I am indeed able to send SMS using this task.
I want to supply values to this task from the database, such as the mobile number, message body, user ID, etc.
How can I create a complex type user variable that can be passed as input to a Web Service task?
It looks like the only answer is to change the web service to accept only simple types as parameters. I have scoured the web and there seems to be no way to dynamically create complex types for consumption by the input values in the web service task.
The easier way is to use the Script Component for passing variables to a web service. Check http://amolpandey.com/2016/09/26/ssis-script-task-to-obtain-geo-cordinates-from-address-text-via-google-api/ & http://www.sqlmusings.com/2011/03/25/geocode-locations-using-google-maps-v3-api-and-ssis/.
Tested and working. Using this task you can pass the SSIS variables/parameters.
Example: get the ID, address, zip code, city and country from a table with an Execute SQL Task (General tab: ResultSet = Full result set; Result Set tab: Result Name = 0, Variable Name = User::YourObject). The next task is a Foreach Loop Container (Collection tab: Foreach ADO Enumerator, ADO object source variable = User::YourObject, enumeration mode = Rows in the first table; Variable Mappings tab: User::Id = 0, User::Address = 1, etc.). Inside the Foreach Loop Container you add a Data Flow Task, whose source will be a Script Component. If you can be more specific about your logic, we may be able to assist you more.
Okay, so I came across the same problem. I needed to pass one parameter as a complex type.
Create a Web Service task in your package.
Fill in all the needed properties on the General tab: HttpConnection and WSDLFile
Fill in the properties on the Input tab: Service, Method
Below, click on Value and manually enter the value you need (mine is 2021-11-15)
Deploy and execute the package to be sure everything is OK
After these easy steps, go into the folder where the package is located. Right click on the package file (Package.dtsx) and select Open with > Notepad. Using Notepad's Find function, search for the value you manually entered.
The part we are looking for looks like this in my case:
<WSTask:ComplexValue>
<WSTask:ComplexProperty
WSTask:Name="date"
WSTask:Datatype="dateTime"
WSTask:ParamType="Primitive">
<WSTask:PrimitiveValue>2021-11-15</WSTask:PrimitiveValue>
</WSTask:ComplexProperty>
</WSTask:ComplexValue>
Finally, I found what I was looking for. For the second part, I needed that parameter to change to the current date every time I execute the package. In PowerShell I managed to write code that replaces the date in the string <WSTask:PrimitiveValue>2021-11-15</WSTask:PrimitiveValue> with the current date every time the package is executed. The code looks like this:
# Today's and yesterday's dates, formatted like the value in the package XML.
# This assumes the package still contains yesterday's date, i.e. the script runs daily.
$Now = Get-Date -Format "yyyy-MM-dd"
$Yesterday = (Get-Date).AddDays(-1).ToString("yyyy-MM-dd")

# Replace yesterday's date with today's inside the package file
$file = ((Get-Content -path "C:\Package.dtsx" -Raw) -replace "<WSTask:PrimitiveValue>$Yesterday</WSTask:PrimitiveValue>", "<WSTask:PrimitiveValue>$Now</WSTask:PrimitiveValue>")
[System.IO.File]::WriteAllText("C:\Package.dtsx", $file)

# This part will execute the package
dtexec.exe /f "C:\Package.dtsx"
After all this, I scheduled the script in Task Scheduler and it works.
In my case, changing the type of the request from complex to simple wasn't an option, and all I needed was to pass just one parameter.
Hope it's going to help somebody.