Cloud Build API call to check information regarding the latest build - google-cloud-platform

I have been using the Cloud Build API to get the latest image information from Google Cloud Build, which builds the deployment image for our Django backend application, so that we can trigger a new Job/Pod in our Google Kubernetes cluster.
Below is the code I use to collect the info.
from google.cloud.devtools import cloudbuild_v1

def sample_list_builds():
    # Create a client
    client = cloudbuild_v1.CloudBuildClient()

    # Initialize request argument(s)
    request = cloudbuild_v1.ListBuildsRequest(
        project_id="project_id_value",
    )

    # Make the request
    page_result = client.list_builds(request=request)

    # Handle the response
    for response in page_result:
        print(response)
I just want to exit the loop when the first successful build is found, but I cannot find how to compare against Status.SUCCESS; it doesn't seem to be a string. What should I compare it against? This is what a printed response looks like:
images: "eu.gcr.io/.../.../...-dev:f2529...0ac00402"
project_id: "..."
logs_bucket: "gs://106...1.cloudbuild-logs.googleusercontent.com"
source_provenance {
}
build_trigger_id: "...-d5fd-47b7-8949-..."
options {
substitution_option: ALLOW_LOOSE
logging: LEGACY
dynamic_substitutions: true
pool {
}
}
log_url: "https://console.cloud.google.com/cloud-build/builds/...-1106-44d5-a634-...?project=..."
substitutions {
key: "BRANCH_NAME"
value: "staging"
}
substitutions {
key: "COMMIT_SHA"
value: "..."
}
substitutions {
key: "REF_NAME"
value: "staging"
}
substitutions {
key: "REPO_NAME"
value: "videoo-app"
}
substitutions {
key: "REVISION_ID"
value: "....aa3f5276deda3c10ac00402"
}
substitutions {
key: "SHORT_SHA"
value: "f2529c2"
}
substitutions {
key: "TRIGGER_BUILD_CONFIG_PATH"
}
substitutions {
key: "TRIGGER_NAME"
value: "rmgpgab-videoo-app-dev-europe-west1-...--storb"
}
substitutions {
key: "_DEPLOY_REGION"
value: "europe-west1"
}
substitutions {
key: "_ENTRYPOINT"
value: "gunicorn -b :$PORT videoo.wsgi"
}
substitutions {
key: "_GCR_HOSTNAME"
value: "eu.gcr.io"
}
substitutions {
key: "_LABELS"
value: "gcb-trigger-id=...-d5fd-47b7-8949-..."
}
substitutions {
key: "_PLATFORM"
value: "managed"
}
substitutions {
key: "_SERVICE_NAME"
value: "videoo-app-dev"
}
substitutions {
key: "_TRIGGER_ID"
value: "...-d5fd-47b7-8949-..."
}
The following code is not working as expected:
def sample_list_builds():
    # Create a client
    client = cloudbuild_v1.CloudBuildClient()

    # Initialize request argument(s)
    request = cloudbuild_v1.ListBuildsRequest(
        project_id=settings.PROJECT_ID,
    )

    # Make the request
    page_result = client.list_builds(request=request)

    # Handle the response
    for response in page_result:
        print(response.status)
        if response.status == "Status.SUCCESS":
            print(response.results['images']['name'])
            break
How can I compare the status field against the SUCCESS case?

The response should include both status and status_detail.
If you import the Build type:
from google.cloud.devtools.cloudbuild_v1.types import Build
you should be able to use e.g. Build.Status.SUCCESS and compare against that.
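As a minimal sketch of the loop with that enum comparison (the project ID and the idea of reading the built image names from results are taken from your snippets; treat it as a sketch rather than a drop-in):
from google.cloud.devtools import cloudbuild_v1
from google.cloud.devtools.cloudbuild_v1.types import Build

def first_successful_build(project_id):
    client = cloudbuild_v1.CloudBuildClient()
    request = cloudbuild_v1.ListBuildsRequest(project_id=project_id)
    for build in client.list_builds(request=request):
        # build.status is a Build.Status enum member, not a string
        if build.status == Build.Status.SUCCESS:
            # results.images is a repeated field of built images
            for image in build.results.images:
                print(image.name)
            return build
    return None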
As stated here, Cloud Build publishes messages on a Google Pub/Sub topic when your build's state changes. It could also be helpful to take a look at Pull subscriptions.

Related

Custom Check for GCP Cloud SQL Database Flags

I have been working with tfsec for about a week, so I am still figuring things out. So far the product is pretty awesome. That being said, I'm having a bit of trouble getting this custom check for Google Cloud SQL to work as expected. The goal of the check is to ensure the database flag for remote access is set to "off". The TF code below should pass the custom check, but it does not. Instead I get an error (see below).
I figured maybe I am not using subMatch/predicateMatchSpec correctly, but no matter what I do the check keeps failing. There is a similar check that is included as a standard check for GCP. I ran the custom check logic through a YAML checker and it came back okay, so I can rule out any YAML-specific syntax errors.
TF Code (Pass example)
resource "random_id" "db_name_suffix" {
byte_length = 4
}
resource "google_sql_database_instance" "instance" {
provider = google-beta
name = "private-instance-${random_id.db_name_suffix.hex}"
region = "us-central1"
database_version = "SQLSERVER_2019_STANDARD"
root_password = "#######"
depends_on = [google_service_networking_connection.private_vpc_connection]
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = false
private_network = google_compute_network.private_network.id
require_ssl = true
}
backup_configuration {
enabled = true
}
password_validation_policy {
min_length = 6
reuse_interval = 2
complexity = "COMPLEXITY_DEFAULT"
disallow_username_substring = true
password_change_interval = "30s"
enable_password_policy = true
}
database_flags {
name = "contained database authentication"
value = "off"
}
database_flags {
name = "cross db ownership chaining"
value = "off"
}
database_flags {
name = "remote access"
value = "off"
}
}
}
Tfsec Custom Check:
---
checks:
  - code: SQL-01 Ensure Remote Access is disabled
    description: Ensure Remote Access is disabled
    impact: Prevents locally stored procedures from being run remotely
    resolution: configure remote access = off
    requiredTypes:
      - resource
    requiredLabels:
      - google_sql_database_instance
    severity: HIGH
    matchSpec:
      name: settings
      action: isPresent
      subMatchOne:
        - name: database_flags
          action: isPresent
          predicateMatchSpec:
            - name: name
              action: equals
              value: remote access
            - name: value
              action: equals
              value: off
    errorMessage: DB remote access has not been disabled
    relatedLinks:
      - http://testcontrols.com/gcp
Error Message
Error: invalid option: failed to load custom checks from ./custom_checks: Check did not pass the expected schema. yaml: unmarshal errors:
line 15: cannot unmarshal !!map into []custom.MatchSpec
I was finally able to get this working last night. This worked for me:
---
checks:
  - code: SQL-01 Ensure Remote Access is disabled
    description: Ensure Remote Access is disabled
    impact: Prevents locally stored procedures from being run remotely
    resolution: configure remote access = off
    requiredTypes:
      - resource
    requiredLabels:
      - google_sql_database_instance
    severity: HIGH
    matchSpec:
      name: settings
      action: isPresent
      predicateMatchSpec:
        - name: database_flags
          action: isPresent
          subMatch:
            name: name
            action: equals
            value: remote access
        - action: and
          subMatch:
            name: value
            action: equals
            value: off
    errorMessage: DB remote access has not been disabled
    relatedLinks:
      - http://testcontrols.com/gcp

How to pass query parameter of BigQuery insert job in Cloud Workflow using Terraform

I encountered an error when running a Cloud Workflow that's supposed to execute a parameterised query.
The Cloud Workflow error is as follows:
"message": "Query parameter 'run_dt' not found at [1:544]",
"reason": "invalidQuery"
The Terraform code that contains the workflow is like this:
resource "google_workflows_workflow" "workflow_name" {
name = "workflow"
region = "location"
description = "description"
source_contents = <<-EOF
main:
params: [input]
steps:
- init:
assign:
- project_id: ${var.project}
- location: ${var.region}
- run_dt: $${map.get(input, "run_dt")}
- runQuery:
steps:
- insert_query:
call: googleapis.bigquery.v2.jobs.insert
args:
projectId: ${var.project}
body:
configuration:
query:
query: ${replace(templatefile("../../bq-queries/query.sql", { "run_dt" = "input.run_dt" } ), "\n", " ")}
destinationTable:
projectId: ${var.project}
datasetId: "dataset-name"
tableId: "table-name"
create_disposition: "CREATE_IF_NEEDED"
write_disposition: "WRITE_APPEND"
allowLargeResults: true
useLegacySql: false
partitioning_field: "dt"
- the_end:
return: "SUCCESS"
EOF
}
The query in the query.sql file looks like this:
SELECT * FROM `project.dataset.table-name`
WHERE sv.dt=#run_dt
With the code above the Terraform deployment succeeded, but the workflow failed.
If I wrote input.run_dt without the double quotes, I'd encounter this Terraform error:
A managed resource "input" "run_dt" has not been declared in the root module.
If I wrote it as $${input.run_dt}, I'd encounter this Terraform error:
This character is not used within the language.
If I wrote it as ${input.run_dt}, I'd encounter this Terraform error:
Expected the start of an expression, but found an invalid expression token.
How can I pass the query parameter of this BigQuery job in Cloud Workflow using Terraform?
Found the solution!
Add the queryParameters field in the subworkflow:
queryParameters:
  parameterType: {"type": "DATE"}
  parameterValue: {"value": '$${run_dt}'}
  name: "run_dt"

How can I update a lifecycle configuration with a filter based on both prefix and multiple tags in Ruby?

I want to put a lifecycle_configuration to an S3 bucket with a rule that uses a filter with multiple tags and a prefix.
I can successfully put_lifecycle_configuration if the filter uses only one tag or one prefix, but I get an Aws::S3::Errors::MalformedXML (The XML you provided was not well-formed or did not validate against our published schema) response from AWS if I try to use an and: to combine multiple tags, or a tag and a prefix.
(edit: put the prefix:... within the and: Hash per Ermiya's answer below)
What am I doing wrong?
Here is my rule:
aws_s3_backup_prefix = "production_backup" # this is fetched from ENV in real life

rule_expire_yearly_after_10y = {
  id: "Expire 1 January backups after 10 years",
  filter: {
    and: {
      prefix: aws_s3_backup_prefix,
      tags: [
        { key: 'date-month-day', value: '1' },
        { key: 'date-month-num', value: '1' }
      ]
    }
  },
  status: 'Enabled',
  expiration: {
    days: 3650
  }
}
And here is how I use it to put the lifecycle configuration:
# aws_client is a valid Aws::S3::Client
# I have access to aws_s3_backup_bucket_name
# I can get and put a simple lifecycle_configuration (no 'and:') with this client and bucket
aws_client.put_bucket_lifecycle_configuration({
  bucket: aws_s3_backup_bucket_name,
  lifecycle_configuration: {
    rules: [ rule_expire_yearly_after_10y ]
  }
})
Config:
ruby 2.6.6
aws-sdk-core 3.109.1
aws-sdk-s3 1.103.0
AWS Documentation: S3 User Guide: Examples of lifecycle configuration
To specify a filter based on the key prefix and one or more tags, you need to place the prefix inside of the and element, not outside. Amazon S3 can then combine the prefix and tag filters.
This is why it's complaining about malformed XML.
This should apply the lifecycle rule to objects with a key prefix of aws_s3_backup_prefix, a date-month-day tag value of 1, and a date-month-num tag value of 1:
rule_expire_yearly_after_10y = {
  id: "Expire 1 January backups after 10 years",
  filter: {
    and: {
      prefix: aws_s3_backup_prefix,
      tags: [
        { key: 'date-month-day', value: '1' },
        { key: 'date-month-num', value: '1' }
      ]
    }
  },
  status: 'Enabled',
  expiration: {
    days: 3650
  }
}
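For comparison, the same rule expressed in Python with boto3 follows an identical shape, which can help confirm the structure the S3 API expects: prefix and tags both sit inside the And element (bucket name and prefix below are placeholders):
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "Expire 1 January backups after 10 years",
                "Filter": {
                    "And": {  # prefix and tags combined inside 'And'
                        "Prefix": "production_backup",
                        "Tags": [
                            {"Key": "date-month-day", "Value": "1"},
                            {"Key": "date-month-num", "Value": "1"},
                        ],
                    }
                },
                "Status": "Enabled",
                "Expiration": {"Days": 3650},
            }
        ]
    },
)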
This bug was fixed in the very next version.
I was using aws-sdk-core 3.109.1 and this was fixed in aws-sdk-core 3.109.2

Handling nested JSON messages with AWS IoT Core rules and AWS Lambda

We are using an AWS IoT Rule to forward all messages from things to a Lambda function and appending some properties on the way, using this query:
SELECT *, topic(2) AS DeviceId, timestamp() AS RoutedAt FROM 'devices/+/message'
The message sent to the topic is a nested JSON:
{
  "version": 1,
  "type": "string",
  "payload": {
    "property1": "foo",
    "nestedPayload": {
      "nestedProperty": "bar"
    }
  }
}
When we use the same query for another rule and route the messages into an S3 bucket instead of a Lambda, the resulting JSON files in the bucket are as expected:
{
  "DeviceId": "test",
  "RoutedAt": 1618311374770,
  "version": 1,
  "type": "string",
  "payload": {
    "property1": "foo",
    "nestedPayload": {
      "nestedProperty": "bar"
    }
  }
}
But when routing into a lambda function, the properties of the "nestedPayload" are pulled up one level:
{
  "DeviceId": "test",
  "RoutedAt": 1618311374770,
  "version": 1,
  "type": "string",
  "payload": {
    "property1": "foo",
    "nestedProperty": "bar"
  }
}
However, when debugging the Lambda locally using VS Code, providing a JSON file (in other words: not connecting to AWS IoT Core), the JSON structure is as expected, which is why I am assuming the error is not with the JSON serializer / deserializer, but with the rule.
Did anyone experience the same issue?
It turns out, the issue was with the SQL version of the rule.
We created the rule routing to the Lambda using CDK, which by default set the version to "2015-10-08". The rule routing to S3, which didn't show the error, was created manually and used version "2016-03-23". Updating the rule routing to the Lambda to also use "2016-03-23" fixed the issue.
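For reference, here is a hedged sketch of pinning the SQL version explicitly when defining the rule with CDK in Python (assuming CDK v2; the stack name, Lambda function, and asset path are hypothetical, and the key part is aws_iot_sql_version on the L1 CfnTopicRule construct):
from aws_cdk import Stack, aws_iot as iot, aws_lambda as _lambda
from constructs import Construct

class DeviceMessageStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Hypothetical Lambda target; replace with your actual handler.
        handler = _lambda.Function(
            self, "MessageHandler",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
        )

        # Pin the SQL version so nested payloads are not flattened.
        iot.CfnTopicRule(
            self, "DeviceMessageRule",
            topic_rule_payload=iot.CfnTopicRule.TopicRulePayloadProperty(
                sql="SELECT *, topic(2) AS DeviceId, timestamp() AS RoutedAt FROM 'devices/+/message'",
                aws_iot_sql_version="2016-03-23",
                actions=[
                    iot.CfnTopicRule.ActionProperty(
                        lambda_=iot.CfnTopicRule.LambdaActionProperty(
                            function_arn=handler.function_arn
                        )
                    )
                ],
            ),
        )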

Two Step Build Process Jenkins

I am creating a CloudFront service for my organization. I am trying to create a Jenkins job that a user can execute to update a distribution.
I would like the ability for the user to input a Distribution ID and then have Jenkins Auto-Fill a secondary set of parameters. Jenkins would need to grab the configuration for that Distribution (via Groovy or other means) to do that auto-fill. The user then would select which configuration options they would like to change and hit submit. The job would then make the requested updates (via a python script).
Can this be done through some combination of plugins (or any other means)?
// the first input requests the DistributionID from a user
stage 'Input Distribution ID'
def distributionId = input(
    id: 'distributionId', message: "Cloudfront Distribution ID", parameters: [
        [$class: 'TextParameterDefinition',
         description: 'Distribution ID', name: 'DistributionID'],
    ])
echo ("using DistributionID=" + distributionId)

// Second
// Sample data - you'd need to get the real data from somewhere here
// assume data will be in distributionData after this
def map = [
    "1":     [ name: "1",     data: "data_1" ],
    "2":     [ name: "2",     data: "data_2" ],
    "other": [ name: "other", data: "data_other" ]
]
def distributionData
if (distributionId in map.keySet()) {
    distributionData = map[distributionId]
} else {
    distributionData = map["other"]
}

// The third stage uses the gathered data, puts these into default values
// and requests another user input.
// The user now has the choice of altering the values or leave them as-is.
stage 'Configure Distribution'
def userInput = input(
    id: 'userInput', message: 'Change Config', parameters: [
        [$class: 'TextParameterDefinition', defaultValue: distributionData.name,
         description: 'Name', name: 'name'],
        [$class: 'TextParameterDefinition', defaultValue: distributionData.data,
         description: 'Data', name: 'data']
    ])

// Fourth - Now, here's the actual code to alter the Cloudfront Distribution
echo ("Name=" + userInput['name'])
echo ("Data=" + userInput['data'])
Create a new pipeline and copy/paste this into the pipeline script section, then play around with it.
I can easily imagine this code could be implemented in a much better way, but at least it's a start.
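As a sketch of how the "sample data" step could be backed by a real lookup, the Python script the job shells out to might use boto3 along these lines (function and key names are illustrative; it assumes AWS credentials are available to the Jenkins agent):
import boto3

def get_distribution_defaults(distribution_id):
    """Fetch the current CloudFront config to pre-fill the second input step."""
    cloudfront = boto3.client("cloudfront")
    response = cloudfront.get_distribution_config(Id=distribution_id)
    config = response["DistributionConfig"]
    return {
        "name": config.get("Comment", ""),  # surfaced as 'Name' in the input form
        "enabled": config["Enabled"],
        "etag": response["ETag"],  # needed later for update_distribution(IfMatch=...)
    }

if __name__ == "__main__":
    import sys
    print(get_distribution_defaults(sys.argv[1]))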