Two Step Build Process Jenkins - amazon-web-services

I am creating a CloudFront service for my organization. I am trying to create a Jenkins job that a user can run to update a distribution.
I would like the user to input a Distribution ID and then have Jenkins auto-fill a secondary set of parameters. Jenkins would need to grab the configuration for that distribution (via Groovy or other means) to do the auto-fill. The user would then select which configuration options they would like to change and hit submit. The job would then make the requested updates (via a Python script).
Can this be done through some combination of plugins (or any other means)?

// The first input requests the Distribution ID from the user.
stage 'Input Distribution ID'
def distributionId = input(
    id: 'distributionId', message: "Cloudfront Distribution ID", parameters: [
        [$class: 'TextParameterDefinition',
         description: 'Distribution ID', name: 'DistributionID'],
    ])
echo ("using DistributionID=" + distributionId)

// Second - sample data; you'd need to get the real data from somewhere here.
// Assume the data will be in distributionData after this.
def map = [
    "1":     [name: "1",     data: "data_1"],
    "2":     [name: "2",     data: "data_2"],
    "other": [name: "other", data: "data_other"]
]
def distributionData
if (distributionId in map.keySet()) {
    distributionData = map[distributionId]
} else {
    distributionData = map["other"]
}

// The third stage uses the gathered data, puts it into default values
// and requests another user input.
// The user now has the choice of altering the values or leaving them as-is.
stage 'Configure Distribution'
def userInput = input(
    id: 'userInput', message: 'Change Config', parameters: [
        [$class: 'TextParameterDefinition', defaultValue: distributionData.name,
         description: 'Name', name: 'name'],
        [$class: 'TextParameterDefinition', defaultValue: distributionData.data,
         description: 'Data', name: 'data']
    ])

// Fourth - here is where the actual code to alter the Cloudfront Distribution would go.
echo ("Name=" + userInput['name'])
echo ("Data=" + userInput['data'])
Create a new Pipeline job, copy/paste this into the pipeline script section and play around with it.
I can easily imagine this code could be implemented in a much better way, but at least it's a start.
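For the actual update step mentioned in the question, the Python script could use boto3 to fetch the current distribution config and push back the edited values. A minimal sketch, assuming the Jenkins job hands the chosen values to the script as environment variables (DISTRIBUTION_ID and NEW_COMMENT are illustrative names, and Comment is just an example field):

import os
import boto3

# Values handed over by the Jenkins job; the names are illustrative only.
distribution_id = os.environ["DISTRIBUTION_ID"]
new_comment = os.environ["NEW_COMMENT"]

client = boto3.client("cloudfront")

# Fetch the current configuration together with its ETag.
response = client.get_distribution_config(Id=distribution_id)
config = response["DistributionConfig"]
etag = response["ETag"]

# Apply whichever fields the user chose to change (Comment is just an example).
config["Comment"] = new_comment

# Push the updated configuration back; IfMatch must be the ETag just read.
client.update_distribution(
    Id=distribution_id,
    DistributionConfig=config,
    IfMatch=etag,
)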

Related

AWS Systems Manager is not resolving Automation Variable

I have a simple AWS Systems Manager Automation that is designed to rotate the local Windows password for systems located at externalized sites. During Step 7 of the automation, AWS calls and executes a PowerShell command document that validates the rotated password and outputs a string value of either True or False in JSON format. This string value is then passed back into the automation and sent to CloudWatch.
I am having an issue where the True or False value passed into the automation in Step 7 via the validPassword variable is not getting resolved when passed into Step 8. Instead, only the Automation variable identifier ({{CheckNewPassword.validPassword}}) is passed.
Does anyone know why this is happening? I assume it has something to do with the command document not producing output in a format that Systems Manager likes.
Any assistance would be appreciated.
Step 7 Output
{
"validPassword": "True"
}
{"Status":"Success","ResponseCode":0,"Output":"{
\"validPassword\": \"True\"
}
","CommandId":"6419ba15-b0f3-4af4-86a2-c4693639fc9e"}
Step 8 Input Passed from Step 7
{"passwordValid":"{{CheckNewPassword.validPassword}}","siteCode":"LBZ1-20","num_failedValidation":1}
AWS Automation Document -- Step 7 and 8
- name: CheckNewPassword
  action: 'aws:runCommand'
  inputs:
    DocumentName: SPIN_CheckPass
    InstanceIds:
      - '{{nodeID}}'
    Parameters:
      password:
        - '{{GenerateNewPassword.newPassword}}'
  outputs:
    - Name: validPassword
      Selector: validPassword
      Type: String
    - Name: dataType
      Selector: dataType
      Type: String
- name: RecordPasswordStatus
  action: 'aws:invokeLambdaFunction'
  inputs:
    InvocationType: RequestResponse
    FunctionName: SPIN-CheckPassMetric
    InputPayload:
      passwordValid: '{{CheckNewPassword.validPassword}}'
      siteCode: '{{siteCode}}'
      num_failedValidation: 1
AWS Command Document (SPIN_CheckPass)
{
  "schemaVersion": "2.2",
  "description": "Check Rotated Password",
  "parameters": {
    "password": {
      "type": "String",
      "description": "The new password used in the password rotation."
    }
  },
  "mainSteps": [
    {
      "action": "aws:runPowerShellScript",
      "name": "rotatePassword",
      "inputs": {
        "runCommand": [
          "function checkPass {",
          "    param (",
          "        $password",
          "    )",
          "    $username = 'admin'",
          "    $password = $password",
          "    $computer = $env:COMPUTERNAME",
          "    Add-Type -AssemblyName System.DirectoryServices.AccountManagement",
          "    $obj = New-Object System.DirectoryServices.AccountManagement.PrincipalContext('machine',$computer)",
          "    [String] $result = $obj.ValidateCredentials($username, $password)",
          "",
          "    $json = @{",
          "        validPassword = $result",
          "    } | ConvertTo-Json",
          "",
          "    return $json",
          "}",
          "checkPass('{{password}}')"
        ],
        "runAsElevated": true
      }
    }
  ]
}
I've tried changing the data type of the validPassword variable to a Boolean, and I've tried changing the format of the command document from JSON to YAML, neither of which has worked.
I've also attempted to capture another output element from the command document into a variable, which also results in the unresolved variable name being passed as input to subsequent steps.
AWS Support confirmed that this is a bug they are now tracking.
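As a possible interim workaround while that bug is open, the raw command output can be fetched and parsed outside the Automation variable mechanism, for example from a small script or Lambda using boto3. A minimal sketch (the command and instance IDs are placeholders; in practice they would come from the CheckNewPassword step):

import json
import boto3

ssm = boto3.client("ssm")

# IDs are placeholders taken from the example output above.
invocation = ssm.get_command_invocation(
    CommandId="6419ba15-b0f3-4af4-86a2-c4693639fc9e",
    InstanceId="i-0123456789abcdef0",
)

# StandardOutputContent holds the JSON emitted by the PowerShell script.
result = json.loads(invocation["StandardOutputContent"])
valid_password = result["validPassword"]  # "True" or "False" as a string
print(valid_password)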

How to pass query parameter of BigQuery insert job in Cloud Workflow using Terraform

I encountered an error when running a Cloud Workflow that's supposed to execute a parameterised query.
The Cloud Workflow error is as follows:
"message": "Query parameter 'run_dt' not found at [1:544]",
"reason": "invalidQuery"
The Terraform code that contains the workflow is like this:
resource "google_workflows_workflow" "workflow_name" {
name = "workflow"
region = "location"
description = "description"
source_contents = <<-EOF
main:
params: [input]
steps:
- init:
assign:
- project_id: ${var.project}
- location: ${var.region}
- run_dt: $${map.get(input, "run_dt")}
- runQuery:
steps:
- insert_query:
call: googleapis.bigquery.v2.jobs.insert
args:
projectId: ${var.project}
body:
configuration:
query:
query: ${replace(templatefile("../../bq-queries/query.sql", { "run_dt" = "input.run_dt" } ), "\n", " ")}
destinationTable:
projectId: ${var.project}
datasetId: "dataset-name"
tableId: "table-name"
create_disposition: "CREATE_IF_NEEDED"
write_disposition: "WRITE_APPEND"
allowLargeResults: true
useLegacySql: false
partitioning_field: "dt"
- the_end:
return: "SUCCESS"
EOF
}
The query in the query.sql file looks like this:
SELECT * FROM `project.dataset.table-name`
WHERE sv.dt=#run_dt
With the code above the Terraform deployment succeeded, but the workflow failed.
If i wrote "input.run_dt" without double quote, i'd encounter Terraform error:
A managed resource "input" "run_dt" has not been declared in the root module.
If i wrote it as $${input.run_dt}, i'd encounter Terraform error:
This character is not used within the language.
If i wrote it as ${input.run_dt}, i'd encounter Terraform error:
Expected the start of an expression, but found an invalid expression token.
How can I pass the query parameter of this BigQuery job in Cloud Workflow using Terraform?
Found the solution!
Add a queryParameters field in the subworkflow:
queryParameters:
  - parameterType: {"type": "DATE"}
    parameterValue: {"value": '$${run_dt}'}
    name: "run_dt"

How can I update a lifecycle configuration with a filter based on both prefix and multiple tags in Ruby?

I want to put a lifecycle configuration on an S3 bucket, with a rule that uses a filter with multiple tags and a prefix.
I can successfully put_lifecycle_configuration if the filter uses only one tag or one prefix, but I get an Aws::S3::Errors::MalformedXML (The XML you provided was not well-formed or did not validate against our published schema) response from AWS if I try to use an and: to combine multiple tags, or a tag and a prefix.
(edit: put the prefix: ... within the and: Hash per Ermiya's answer below)
What am I doing wrong?
Here is my rule:
aws_s3_backup_prefix = "production_backup" # this is fetched from ENV in real life
rule_expire_yearly_after_10y = {
id: "Expire 1 January backups after 10 years",
filter: {
and: {
prefix: aws_s3_backup_prefix,
tags: [
{ key: 'date-month-day', value: '1'},
{ key: 'date-month-num', value: '1'}
]
}
},
status: 'Enabled',
expiration: {
days: 3650
}
}
And here is how I use it to put the lifecycle configuration:
# aws_client is a valid Aws::S3::Client
# I have access to aws_s3_backup_bucket_name
# I can get and put a simple lifecycle_configuration (no 'and:') with this client and bucket
aws_client.put_bucket_lifecycle_configuration({
  bucket: aws_s3_backup_bucket_name,
  lifecycle_configuration: {
    rules: [ rule_expire_yearly_after_10y ]
  }
})
Config:
ruby 2.6.6
aws-sdk-core 3.109.1
aws-sdk-s3 1.103.0
AWS Documentation: S3 User Guide: Examples of lifecycle configuration
To specify a filter based on the key prefix and one or more tags, you need to place the prefix inside the and element, not outside it; Amazon S3 can then combine the prefix and tag filters.
This is why it's complaining about malformed XML.
This should apply the lifecycle rule to objects with a key prefix of aws_s3_backup_prefix, a date-month-day tag value of 1 and a date-month-num tag value of 1:
rule_expire_yearly_after_10y = {
  id: "Expire 1 January backups after 10 years",
  filter: {
    and: {
      prefix: aws_s3_backup_prefix,
      tags: [
        { key: 'date-month-day', value: '1' },
        { key: 'date-month-num', value: '1' }
      ]
    }
  },
  status: 'Enabled',
  expiration: {
    days: 3650
  }
}
This bug was fixed in the very next version.
I was using aws-sdk-core 3.109.1 and this was fixed in aws-sdk-core 3.109.2
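For reference, the same shape (prefix nested inside the And element) applies when doing this from Python with boto3; a minimal sketch, with placeholder bucket and prefix names:

import boto3

s3 = boto3.client("s3")

# Same rule as above in boto3's casing: Prefix sits inside And, alongside Tags.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "Expire 1 January backups after 10 years",
                "Filter": {
                    "And": {
                        "Prefix": "production_backup",
                        "Tags": [
                            {"Key": "date-month-day", "Value": "1"},
                            {"Key": "date-month-num", "Value": "1"},
                        ],
                    }
                },
                "Status": "Enabled",
                "Expiration": {"Days": 3650},
            }
        ]
    },
)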

How to ingest variable data like passwords into compute instance when deploying from template

We are trying to figure out how we can create a Compute Engine template and set some information like passwords, with the help of variables, at the moment the final instance is generated by Deployment Manager, not in the base image.
When deploying something from the Marketplace you can see that passwords are generated by "password.py" and stored as metadata in the VM's template. But I can't find the code that writes this data onto the VM's disk image.
Could someone explain how this can be achieved?
Edit:
I found out that startup scripts are able to read the instance's metadata: https://cloud.google.com/compute/docs/storing-retrieving-metadata. Is this how they do it in Marketplace click-to-deploy solutions like https://console.cloud.google.com/marketplace/details/click-to-deploy-images/wordpress? Or is there an even better way to accomplish this?
The best way is to use the metadata server.
In a start-up script, use this to recover all the attributes of your VM:
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/"
Then do what you want with them.
Don't forget to delete the secrets from metadata after use, or change them on the compute instance. Secrets must stay secret.
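The same attributes can also be read from a Python start-up script; a minimal sketch, assuming the requests library is available on the instance and using a hypothetical attribute name:

import requests

METADATA_URL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/"
HEADERS = {"Metadata-Flavor": "Google"}  # required by the metadata server


def get_attribute(name):
    # Returns the value of a single custom metadata attribute, e.g. a generated password.
    response = requests.get(METADATA_URL + name, headers=HEADERS)
    response.raise_for_status()
    return response.text


# "admin-password" is a hypothetical attribute name set on the instance template.
print(get_attribute("admin-password"))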
By the way, I would recommend you have a look at another tool: Berglas. Berglas is made by a Google Developer Advocate specialized in security, Seth Vargo. In summary, the principle is:
Bootstrap a bucket with Berglas
Create a secret in this bucket with Berglas
Pass the reference to this secret in your compute metadata (berglas://<my_bucket>/<my secret name>)
Use Berglas in the start-up script to resolve the secret.
All these actions are possible from the command line, so they can be integrated in a script.
You can use Python templates, which gives you more flexibility. In your YAML you can call the Python script to fill in the necessary information. From the documentation:
imports:
- path: vm-template.py

resources:
- name: vm-1
  type: vm-template.py
- name: a-new-network
  type: compute.v1.network
  properties:
    routingConfig:
      routingMode: REGIONAL
    autoCreateSubnetworks: true
Where vm-template.py is a Python script:
"""Creates the virtual machine."""
COMPUTE_URL_BASE = 'https://www.googleapis.com/compute/v1/'
def GenerateConfig(unused_context):
"""Creates the first virtual machine."""
resources = [{
'name': 'the-first-vm',
'type': 'compute.v1.instance',
'properties': {
'zone': 'us-central1-f',
'machineType': ''.join([COMPUTE_URL_BASE, 'projects/[MY_PROJECT]',
'/zones/us-central1-f/',
'machineTypes/f1-micro']),
'disks': [{
'deviceName': 'boot',
'type': 'PERSISTENT',
'boot': True,
'autoDelete': True,
'initializeParams': {
'sourceImage': ''.join([COMPUTE_URL_BASE, 'projects/',
'debian-cloud/global/',
'images/family/debian-9'])
}
}],
'networkInterfaces': [{
'network': '$(ref.a-new-network.selfLink)',
'accessConfigs': [{
'name': 'External NAT',
'type': 'ONE_TO_ONE_NAT'
}]
}]
}
}]
return {'resources': resources}
Now for the password, it depends on which OS the VM is running, Windows or Linux.
For Linux, you can add a startup script which injects an SSH public key.
For Windows, you can first prepare the proper key; see Automate password generation.

How do I modify an AWS Step function?

From the AWS console it seems like AWS Step Functions state machines are immutable. Is there a way to modify them? If not, how does version control work? Do I have to create a new state machine every time I make incremental changes to it?
As per this forum entry, there is no way yet to modify an existing state machine. You need to create a new one every time.
At the moment you can edit a state machine: there is an "Edit state machine" button in the upper-right corner.
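For completeness, an existing state machine can also be updated in place through the API; a minimal boto3 sketch (the ARN and the definition file name are placeholders):

import boto3

sfn = boto3.client("stepfunctions")

# Load the new Amazon States Language definition; the file name is a placeholder.
with open("step_function_definition.json") as fd:
    definition = fd.read()

# Updates the definition of an existing state machine without recreating it.
sfn.update_state_machine(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:MyStateMachine",
    definition=definition,
)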
These days I have been using CloudFormation with boto3. I am going to write it out here, because I had been a bit intimidated by CloudFormation in the past, but with an end-to-end example maybe it is more approachable.
step_function_stack.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: >-
  A description of the State Machine goes here.
Resources:
  MyStateMachineName:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      RoleArn: "arn:aws:iam::{{aws_account_id}}:role/service-role/StepFunctions-MyStepFunctionRole"
      StateMachineName: "MyStateMachineName"
      StateMachineType: "EXPRESS"
      DefinitionString:
        Fn::Sub: |
          {{full_json_definition}}
manage_step_functions.py
import boto3
import os
import time
from jinja2 import Environment


def do_render(full_json_definition):
    with open('step_function_stack.yaml') as fd:
        template = fd.read()
    yaml = Environment().from_string(template).render(
        full_json_definition=full_json_definition,
        aws_account_id=os.getenv('AWS_ACCOUNT_ID'))
    return yaml


def update_step_function(stack_name, full_json_definition):
    yaml = do_render(full_json_definition)
    client = boto3.client('cloudformation')
    response = client.update_stack(
        StackName=stack_name,
        TemplateBody=yaml,
        Capabilities=[
            'CAPABILITY_AUTO_EXPAND',
        ])
    return response


def create_step_function(stack_name, full_json_definition):
    yaml = do_render(full_json_definition)
    client = boto3.client('cloudformation')
    # Creates the stack for the first time.
    response = client.create_stack(
        StackName=stack_name,
        TemplateBody=yaml,
        Capabilities=[
            'CAPABILITY_AUTO_EXPAND',
        ])
    return response


def get_lambdas_stack_latest_events(stack_name):
    # Get the first 100 most recent events.
    client = boto3.client('cloudformation')
    return client.describe_stack_events(
        StackName=stack_name)


def wait_on_update(stack_name):
    events = None
    while events is None or events['StackEvents'][0]['ResourceStatus'] not in [
            'UPDATE_COMPLETE', 'UPDATE_ROLLBACK_COMPLETE',
            'DELETE_COMPLETE', 'CREATE_COMPLETE']:
        print(events['StackEvents'][0]['ResourceStatus'] if events else '...')
        events = get_lambdas_stack_latest_events(stack_name)
        time.sleep(1)
    return events
step_function_definition.json
{
  "Comment": "This is a Hello World State Machine from https://docs.aws.amazon.com/step-functions/latest/dg/getting-started.html#create-state-machine",
  "StartAt": "Hello",
  "States": {
    "Hello": {
      "Type": "Pass",
      "Result": "Hello",
      "Next": "World"
    },
    "World": {
      "Type": "Pass",
      "Result": "World",
      "End": true
    }
  }
}
Create a step function
# From a Python shell, for example.
# First set any privileged variables through environment variables so they are not checked into code:
# export AWS_ACCOUNT_ID=999999999
# Edit step_function_definition.json, then read it:
with open('step_function_definition.json') as fd:
step_function_definition = fd.read()
import manage_step_functions as msf
stack_name = 'MyGloriousStepFuncStack'
msf.create_step_function(stack_name, step_function_definition)
If you are ready to update your State Machine, you can edit step_function_definition.json, or you might create a new file for reference, step_function_definition-2021-01-29.json (because at the time of this writing Step Functions don't have versions like Lambda, for instance).
import manage_step_functions as msf
stack_name = 'MyGloriousStepFuncStack'
with open('step_function_definition-2021-01-29.json') as fd:
step_function_definition = fd.read()
msf.update_step_function(stack_name, step_function_definition)