How do I add an authToken for "Trigger builds remotely" to a job's config.xml - jenkins-job-dsl

I want to enable the "Trigger builds remotely" option for a Jenkins job, with an authentication token defined. I tried this:
freeStyleJob('Sandbox/test-trigger') {
    configure { project ->
        (project / 'authToken').setValue('mytoken')
    }
}
According to http://job-dsl.herokuapp.com/, I end up with an authToken line on the top level of the project's config XML (as desired):
<project>
    [...]
    <authToken>mytoken</authToken>
</project>
However, after running the Job-DSL, I do not get the authToken defined in the resulting XML, nor is the option enabled in the config.
Any ideas what I'm doing wrong?
Using Jenkins 1.609.2 with job-dsl 1.37.
UPDATE: job-dsl >= 1.39 now supports the token setting; see https://jenkinsci.github.io/job-dsl-plugin/#method/javaposse.jobdsl.dsl.jobs.FreeStyleJob.authenticationToken

You can simply use the built-in DSL method:
freeStyleJob('Sandbox/test-trigger') {
    authenticationToken('mytoken')
    ...
}
It is not covered in the published DSL API docs, but the API viewer embedded in your Jenkins instance generates documentation for it. You can view it at
<YourJenkinsURL>/plugin/job-dsl/api-viewer/index.html#method/javaposse.jobdsl.dsl.jobs.FreeStyleJob.authenticationToken

It was fixed when I moved the "configure" block to the beginning of the job definition.
So instead of:
freeStyleJob('Sandbox/test-trigger') {
    <lots of other job config>
    configure { project ->
        (project / 'authToken').setValue('mytoken')
    }
}
I changed it to:
freeStyleJob('Sandbox/test-trigger') {
    configure { project ->
        (project / 'authToken').setValue('mytoken')
    }
    <lots of other job config>
}
Now the token configuration is properly kept in the job config.

If you want to avoid hard-coding your token and you are using the dynamic DSL plugin, you can pass it in from a credential.
In your Jenkinsfile.build:
withCredentials([
    string(credentialsId: 'deploy-trigger-token', variable: 'TRIGGER_TOKEN'),
]) {
    jobDsl targets: ".jenkins/deploy_${env.INSTANCE}_svc.dsl",
        ignoreMissingFiles: true,
        additionalParameters: [
            trigger_token: env.TRIGGER_TOKEN
        ]
}
Then in your dsl file:
pipelineJob("Deploy Service") {
...
authenticationToken (trigger_token)
...
}
You will also need to create the deploy-trigger-token credential (a Secret text credential, matching the string binding above) in your Jenkins credentials store.
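For reference, once the job has an authentication token, the remote trigger itself is, in its simplest form, just an HTTP request to the job's build URL with the token as a query parameter (the URL and job name below are placeholders):
curl "https://<YourJenkinsURL>/job/<JobName>/build?token=<token>"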

Related

AWS App Runner does not see runtime_environment_variables from Terraform module

Using Terraform with the aws_apprunner_service resource, I'm creating an AWS App Runner service. According to the documentation, I should be able to pass environment variables as a map.
In my case the service is created fine, but none of the runtime_environment_variables are passed to App Runner. All the other variables provided by AWS are present.
App Runner does not have a panel for env variables, so I listed all available ones from Node.js with console.log(process.env).
Creating the App Runner service from the AWS console and adding variables works correctly: I can see all the default variables and also my custom ones.
My configuration:
resource "aws_apprunner_service" "apprunner" {
service_name = var.name
source_configuration {
authentication_configuration {
access_role_arn = var.role_arn
}
image_repository {
image_configuration {
port = var.port
runtime_environment_variables = {
"test" = "xxx"
}
}
image_identifier = var.image
image_repository_type = var.repository_type
}
}
}
It's a bug in the provider: https://github.com/hashicorp/terraform-provider-aws/issues/19469
The fix is merged: https://github.com/hashicorp/terraform-provider-aws/pull/19471/files#diff-30b95f9698f34518d98ef0aa482508ef13b46cb094fe2fa1133019162ceb4908R707-R709
You should wait for a new tag: https://github.com/hashicorp/terraform-provider-aws/commit/3b05635c2bb9486f5156576b3701746066aa92f8
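Once a provider release containing the fix is tagged, bumping the AWS provider version constraint should be enough to pick it up. A minimal sketch; the version number below is a placeholder, use the first release that actually includes the fix:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # placeholder constraint: replace with the first release that includes the linked fix
      version = ">= 3.45.0"
    }
  }
}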

How to create an AWS SSM Document Package using Terraform

Using Terraform, I am trying to create an AWS SSM Document Package for Chrome so I can install it on various EC2 instances I have. I define these steps via Terraform:
1. Upload a zip containing the Chrome installer plus install and uninstall PowerShell scripts.
2. Add that zip to an SSM package.
However, when I execute terraform apply I receive the following error...
Error updating SSM document: InvalidParameterValueException:
AttachmentSource not provided in the input request.
status code: 400, request id: 8d89da70-64de-4edb-95cd-b5f52207794c
The contents of my main.tf are as follows:
# 1. Add package zip to s3
resource "aws_s3_bucket_object" "windows_chrome_executable" {
  bucket = "mybucket"
  key    = "ssm_document_packages/GoogleChromeStandaloneEnterprise64.msi.zip"
  source = "./software-packages/GoogleChromeStandaloneEnterprise64.msi.zip"
  etag   = md5("./software-packages/GoogleChromeStandaloneEnterprise64.msi.zip")
}
# 2. Create AWS SSM Document Package using zip.
resource "aws_ssm_document" "ssm_document_package_windows_chrome" {
  name          = "windows_chrome"
  document_type = "Package"

  attachments_source {
    key    = "SourceUrl"
    values = ["/path/to/mybucket"]
  }

  content = <<DOC
{
  "schemaVersion": "2.0",
  "version": "1.0.0",
  "packages": {
    "windows": {
      "_any": {
        "x86_64": {
          "file": "GoogleChromeStandaloneEnterprise64.msi.zip"
        }
      }
    }
  },
  "files": {
    "GoogleChromeStandaloneEnterprise64.msi.zip": {
      "checksums": {
        "sha256": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      }
    }
  }
}
DOC
}
If I change the file from a zip to a plain msi I do not receive the error message; however, when I navigate to the package in the AWS console it tells me that the install.ps1 and uninstall.ps1 files are missing (since obviously they weren't included).
Has anyone experienced the above error and do you know how to resolve it? Or does anyone have reference to a detailed example of how to do this?
Thank you.
I ran into this same problem; to fix it, I added a trailing slash to the SourceUrl value parameter:
attachments_source {
  key    = "SourceUrl"
  values = ["/path/to/mybucket/"]
}
My best guess is that SSM appends the file name from the package spec to the value provided in attachments_source, so it needs the trailing slash to build a valid path to the actual file.
This is the way it should be set up for an attachment in S3:
attachments_source {
  key    = "S3FileUrl"
  values = ["s3://packer-bucket/packer_1.7.0_linux_amd64.zip"]
  name   = "packer_1.7.0_linux_amd64.zip"
}
I realized that in the above example there was no way Terraform could identify a dependency between the two resources, i.e. the S3 object needs to be created before the aws_ssm_document. Thus, I added the following explicit dependency inside the aws_ssm_document resource:
depends_on = [
  aws_s3_bucket_object.windows_chrome_executable
]
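In context, the dependency sits at the top level of the document resource; a sketch based on the resource from the question:
resource "aws_ssm_document" "ssm_document_package_windows_chrome" {
  name          = "windows_chrome"
  document_type = "Package"

  # make sure the zip exists in S3 before the document is created or updated
  depends_on = [
    aws_s3_bucket_object.windows_chrome_executable
  ]

  # attachments_source and content blocks as shown above
}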

Google Cloud Functions - How to correctly setup the default credentials?

I'm using Google Cloud Functions to listen to a topic in Pub/Sub and send data to a collection in Firestore. The problem is: whenever I test the function (using the test tab that is provided in GCP) and check the logs from that function, it always throws this error:
Error: Could not load the default credentials.
Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
That link didn't help, by the way, as they say the Application Default Credentials are found automatically, but it's not the case here.
This is how I'm using Firestore, in index.js:
const admin = require('firebase-admin')
admin.initializeApp()
var db = admin.firestore()
// ...
db.collection('...').add(doc)
In my package.json, these are the dependencies (I'm using BigQuery too, which raises the same error):
{
  "name": "[function name]",
  "version": "0.0.1",
  "dependencies": {
    "@google-cloud/pubsub": "^0.18.0",
    "@google-cloud/bigquery": "^4.3.0",
    "firebase-admin": "^8.6.1"
  }
}
I've already tried:
Creating a new service account and using it in the function setting;
Using the command gcloud auth application-default login in Cloud Shell;
Setting the environment variable GOOGLE_APPLICATION_CREDENTIALS via Cloud Shell to a json file (I don't even know if that makes sense);
But nothing seems to work :( How can I configure this default credential so that I don't have to ever configure it again? Like, a permanent setting for the entire project so all my functions can have access to Firestore, BigQuery, IoT Core, etc. with no problems.
This is the code that I am using:
const firebase = require('firebase');
const functions = require('firebase-functions');
const admin = require('firebase-admin');

// key.json is the service account key file downloaded from the Firebase
// console (Project settings > Service accounts) and deployed with the function
const serviceAccount = require("./key.json");

const config = {
  credential: admin.credential.cert(serviceAccount),
  apiKey: "",
  authDomain: "project.firebaseapp.com",
  databaseURL: "https://project.firebaseio.com",
  projectId: "project",
  storageBucket: "project.appspot.com",
  messagingSenderId: "",
  appId: "",
  measurementId: ""
};

firebase.initializeApp(config);

// initialize the Admin SDK explicitly with the same service account so that
// admin.firestore() does not fall back to the missing default credentials
admin.initializeApp({ credential: admin.credential.cert(serviceAccount) });

const db = admin.firestore();
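For completeness, a write through the initialized client would then look like this (the collection name and fields are placeholders):
db.collection('messages').add({ text: 'hello', createdAt: new Date() })
  .then(ref => console.log(`wrote document ${ref.id}`))
  .catch(err => console.error('Firestore write failed', err));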

How to configure Angular to work remotely with Django APIs?

I am running a web application, front-end with Angular and back-end with Django. The thing is: these two frameworks are not running on the same server. How can I configure Angular to work remotely with the APIs? (I have tested the APIs, and they are just fine.)
Check how to set up a proxy for your project in the Angular guide "Proxying to a backend server".
Basically you need to create a proxy.conf.json file and have settings like:
{
  "/api": {
    "target": "http://localhost:3000",
    "secure": false
  }
}
There you can define your backend hostname, port, the available API paths, and other settings.
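With the proxy in place, the Angular code just calls relative URLs and the dev server forwards them to the backend. A minimal sketch, with a made-up service name and endpoint:
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({ providedIn: 'root' })
export class ApiService {
  constructor(private http: HttpClient) {}

  // '/api/items/' is a placeholder endpoint; the dev-server proxy rewrites /api/* to the Django host
  getItems() {
    return this.http.get('/api/items/');
  }
}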
OK, after hours of debugging I finally found it.
FIRST: Create a file named proxy.conf.json in the /src folder and fill it with this JSON:
{
  "/api": {
    "target": "http://test.com/",
    "secure": false,
    "changeOrigin": true,
    "logLevel": "info"
  }
}
This line is ESSENTIAL:
"changeOrigin": true,
THEN: Edit the angular.json file. In the projects section, find architect and append this line to the options section: "proxyConfig": "src/proxy.conf.json". So it should look like this:
...
"options": {
  "browserTarget": "some-name:build",
  "proxyConfig": "src/proxy.conf.json"
},
...
NOTE 1: A trailing comma is not allowed in JSON.
NOTE 2: logLevel gives you more information.
NOTE 3: Thanks to Haifeng for his guide.
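Alternatively, instead of editing angular.json, the proxy file can be passed straight to the dev server with the standard Angular CLI flag:
ng serve --proxy-config src/proxy.conf.json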

Enabling regex support on AWS Managed ElasticSearch in painless scripts

I am trying to upload templates to my AWS managed Elasticsearch.
Elasticsearch responds with a 500 error complaining that I need to set script.painless.regex.enabled to true. I know that you cannot edit the elasticsearch.yml file directly, but is there any way to allow support for regex in Painless scripts on AWS managed ES?
There is no way yet to use regex on an AWS managed ES cluster.
You can try to use StringTokenizer instead, as in the following example.
Example value:
doc['your_str_field.keyword'].value = '{"xxx":"123213","yyy":"123213","zzz":"123213"}'
Painless script:
{
  "script": {
    "lang": "painless",
    "inline": "String xxx = doc['your_str_field.keyword'].value; xxx = xxx.replace('{','').replace('}','').replace('\"','').replace(' ','');StringTokenizer tokenizer = new StringTokenizer(xxx, ',');tokenizer.nextToken();tokenizer.nextToken();StringTokenizer tokenizer_v = new StringTokenizer(tokenizer.nextToken(),':');tokenizer_v.nextToken();return tokenizer_v.nextToken();"
  }
}
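For readability, here is the same inline script unrolled into separate statements (identical logic: it skips the first two key:value pairs and returns the value of the third one):
String xxx = doc['your_str_field.keyword'].value;
xxx = xxx.replace('{','').replace('}','').replace('"','').replace(' ','');
StringTokenizer tokenizer = new StringTokenizer(xxx, ',');
tokenizer.nextToken();                  // skip the first key:value pair
tokenizer.nextToken();                  // skip the second key:value pair
StringTokenizer tokenizer_v = new StringTokenizer(tokenizer.nextToken(), ':');
tokenizer_v.nextToken();                // skip the key
return tokenizer_v.nextToken();         // return the value of the third pair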
Also, I needed to increase max_compilations_rate:
PUT /_cluster/settings
{
  "transient": {
    "script.max_compilations_rate": "500/1m"
  }
}