How do I modify an AWS Step Function?

From the AWS console it seems like AWS Step Functions state machines are immutable. Is there a way to modify one? If not, how does version control work? Do I have to create a new state machine every time I make an incremental change?

As per this forum entry, there is no way yet to modify an existing state machine. You need to create a new one every time.

These days you can edit a state machine: use the "Edit state machine" button in the upper right corner of the console.

These days I have been using CloudFormation with boto3. I am going to write it out here in full, because I had been a bit intimidated by CloudFormation in the past, and an end-to-end example may make it more approachable.
step_function_stack.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: >-
  A description of the State Machine goes here.
Resources:
  MyStateMachineName:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      RoleArn: "arn:aws:iam::{{aws_account_id}}:role/service-role/StepFunctions-MyStepFunctionRole"
      StateMachineName: "MyStateMachineName"
      StateMachineType: "EXPRESS"
      DefinitionString:
        Fn::Sub: |
          {{full_json_definition}}
manage_step_functions.py
import boto3
import os
import time
from jinja2 import Environment


def do_render(full_json_definition):
    with open('step_function_stack.yaml') as fd:
        template = fd.read()
    yaml = Environment().from_string(template).render(
        full_json_definition=full_json_definition,
        aws_account_id=os.getenv('AWS_ACCOUNT_ID'))
    return yaml


def update_step_function(stack_name, full_json_definition):
    yaml = do_render(full_json_definition)
    client = boto3.client('cloudformation')
    response = client.update_stack(
        StackName=stack_name,
        TemplateBody=yaml,
        Capabilities=[
            'CAPABILITY_AUTO_EXPAND',
        ])
    return response
def create_step_function(stack_name, full_json_definition):
    yaml = do_render(full_json_definition)
    client = boto3.client('cloudformation')
    # Note: creates a new stack, unlike update_step_function above.
    response = client.create_stack(
        StackName=stack_name,
        TemplateBody=yaml,
        Capabilities=[
            'CAPABILITY_AUTO_EXPAND',
        ])
    return response
def get_lambdas_stack_latest_events(stack_name):
    # Get the most recent events (first page, up to 100).
    client = boto3.client('cloudformation')
    return client.describe_stack_events(
        StackName=stack_name)


def wait_on_update(stack_name):
    events = None
    terminal_states = ['UPDATE_COMPLETE', 'UPDATE_ROLLBACK_COMPLETE',
                       'DELETE_COMPLETE', 'CREATE_COMPLETE']
    while events is None or events['StackEvents'][0]['ResourceStatus'] not in terminal_states:
        print(events['StackEvents'][0]['ResourceStatus'] if events else '...')
        events = get_lambdas_stack_latest_events(stack_name)
        time.sleep(1)
    return events
step_function_definition.json
{
  "Comment": "This is a Hello World State Machine from https://docs.aws.amazon.com/step-functions/latest/dg/getting-started.html#create-state-machine",
  "StartAt": "Hello",
  "States": {
    "Hello": {
      "Type": "Pass",
      "Result": "Hello",
      "Next": "World"
    },
    "World": {
      "Type": "Pass",
      "Result": "World",
      "End": true
    }
  }
}
Create a step function
# From a python shell, for example.
# First set any privileged variables through environment variables so they are not checked into code:
# export AWS_ACCOUNT_ID=999999999
# Edit step_function_definition.json, then read it.
with open('step_function_definition.json') as fd:
    step_function_definition = fd.read()

import manage_step_functions as msf
stack_name = 'MyGloriousStepFuncStack'
msf.create_step_function(stack_name, step_function_definition)
When you are ready to update your state machine, you can edit step_function_definition.json, or you might create a new file for reference, such as step_function_definition-2021-01-29.json (because at the time of this writing Step Functions do not have versions the way Lambda does, for instance).
import manage_step_functions as msf

stack_name = 'MyGloriousStepFuncStack'
with open('step_function_definition-2021-01-29.json') as fd:
    step_function_definition = fd.read()

msf.update_step_function(stack_name, step_function_definition)
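Alternatively, if you would rather not bring CloudFormation into the picture at all, the Step Functions API can update an existing state machine in place. A minimal sketch with boto3 (the ARN and file name below are placeholders for your own values):

import boto3

# Placeholder ARN; use the ARN of your existing state machine.
state_machine_arn = 'arn:aws:states:us-east-1:123456789012:stateMachine:MyStateMachineName'

with open('step_function_definition.json') as fd:
    definition = fd.read()

sfn = boto3.client('stepfunctions')
# Replaces the definition (and optionally the role) of the existing state machine in place.
response = sfn.update_state_machine(
    stateMachineArn=state_machine_arn,
    definition=definition,
)
print(response['updateDate'])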

Related

How to read SSM Parameter dynamically from Lambda Environment variable

I am keeping the application endpoint in the SSM Parameter Store and I am able to access it from the Lambda environment.
Resources:
  M4IAcarsScheduler:
    Type: AWS::Serverless::Function
    Properties:
      Handler: not.used.in.provided.runtime
      Runtime: provided
      CodeUri: target/function.zip
      MemorySize: 512
      Timeout: 900
      FunctionName: Sample
      Environment:
        Variables:
          SamplePath: !Ref sample1path
          SampleId: !Ref sample1pathid
Parameters:
  sample1path:
    Type: AWS::SSM::Parameter::Value<String>
    Description: Select existing security group for lambda function from Parameter Store
    Default: /sample/path
  sample1pathid:
    Type: AWS::SSM::Parameter::Value<String>
    Description: Select existing security group for lambda function from Parameter Store
    Default: /sample/id
My issue is that when I update the SSM parameter, the Lambda environment variable is not updated dynamically, and every time I need to redeploy.
Is there any way I can handle it dynamically, so that when the value changes in the SSM Parameter Store it is reflected without a restart (redeploy) of the Lambda?
By using SSM parameters in a CloudFormation stack, the parameters get resolved when the CloudFormation stack is deployed. If the value in SSM subsequently changes, there is nothing to update the Lambda, so the Lambda will still have the value that was pulled from SSM at the moment the CloudFormation stack was deployed. The Lambda will not even know that the parameter came from SSM; it will only know that there is a static environment variable configured.
Instead, to use SSM parameters in your Lambda, change your Lambda code so that it fetches the parameter itself at runtime. This AWS blog shows a Python Lambda example of how to fetch the parameters when the Lambda runs:
import os, traceback, json, configparser, boto3
from aws_xray_sdk.core import patch_all
patch_all()

# Initialize boto3 client at global scope for connection reuse
client = boto3.client('ssm')
env = os.environ['ENV']
app_config_path = os.environ['APP_CONFIG_PATH']
full_config_path = '/' + env + '/' + app_config_path

# Initialize app at global scope for reuse across invocations
app = None

class MyApp:
    def __init__(self, config):
        """
        Construct new MyApp with configuration
        :param config: application configuration
        """
        self.config = config

    def get_config(self):
        return self.config

def load_config(ssm_parameter_path):
    """
    Load configparser from config stored in SSM Parameter Store
    :param ssm_parameter_path: Path to app config in SSM Parameter Store
    :return: ConfigParser holding loaded config
    """
    configuration = configparser.ConfigParser()
    try:
        # Get all parameters for this app
        param_details = client.get_parameters_by_path(
            Path=ssm_parameter_path,
            Recursive=False,
            WithDecryption=True
        )
        # Loop through the returned parameters and populate the ConfigParser
        if 'Parameters' in param_details and len(param_details.get('Parameters')) > 0:
            for param in param_details.get('Parameters'):
                param_path_array = param.get('Name').split("/")
                section_position = len(param_path_array) - 1
                section_name = param_path_array[section_position]
                config_values = json.loads(param.get('Value'))
                config_dict = {section_name: config_values}
                print("Found configuration: " + str(config_dict))
                configuration.read_dict(config_dict)
    except:
        print("Encountered an error loading config from SSM.")
        traceback.print_exc()
    finally:
        return configuration

def lambda_handler(event, context):
    global app
    # Initialize app if it doesn't yet exist
    if app is None:
        print("Loading config and creating new MyApp...")
        config = load_config(full_config_path)
        app = MyApp(config)
    return "MyApp config is " + str(app.get_config()._sections)
Here is a post with an example in Node, and similar examples exist for other languages too.
// parameter expected by SSM.getParameter
var parameter = {
    "Name": "/systems/" + event.Name + "/config"
};
responseFromSSM = await SSM.getParameter(parameter).promise();
console.log('SUCCESS');
console.log(responseFromSSM);
var value = responseFromSSM.Parameter.Value;
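Note that the Python example above caches whatever it fetched for the lifetime of the execution environment (it only loads config when app is None). If the goal is for changed parameter values to show up without a redeploy, one option, sketched here under the assumption of a single string parameter at a hypothetical path /sample/path, is to re-fetch the parameter after a short TTL:

import os
import time
import boto3

ssm = boto3.client('ssm')

# Hypothetical parameter name and refresh interval; adjust to your setup.
PARAM_NAME = os.environ.get('SAMPLE_PATH', '/sample/path')
CACHE_TTL_SECONDS = 60

_cached_value = None
_cached_at = 0.0

def get_endpoint():
    """Return the parameter value, re-fetching from SSM once the TTL has expired."""
    global _cached_value, _cached_at
    if _cached_value is None or time.time() - _cached_at > CACHE_TTL_SECONDS:
        response = ssm.get_parameter(Name=PARAM_NAME, WithDecryption=True)
        _cached_value = response['Parameter']['Value']
        _cached_at = time.time()
    return _cached_value

def lambda_handler(event, context):
    # Changes made in Parameter Store become visible within CACHE_TTL_SECONDS,
    # without redeploying the function.
    return {'endpoint': get_endpoint()}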

Lambda Function working, but cannot work with API Gateway

I have a Lambda function that works when I test it using a test event:
{
  "num1_in": 51.5,
  "num2_in": -0.097
}

import json
import Function_and_Data_List

# Parse out query string parameters
def lambda_handler(event, context):
    num1_in = event['num1_in']
    num2_in = event['num2_in']
    coord = {'num1': num1_in, 'num2': num2_in}
    output = func1(Function_and_Data_List.listdata, coord)
    return {
        "Output": output
    }
However, when I use API Gateway to create a REST API I keep getting errors. My steps for creating the REST API are:
1.) Build REST API
2.) Actions -> Create Resource
3.) Actions -> Create Method -> GET
4.) Integration type is Lambda Function, Use Lambda Proxy Integration
5.) Deploy
What am I missing for getting this API to work?
If you use Lambda proxy integration, your payload will be in the body of the event. Your return format also appears to be incorrect.
Therefore, I would recommend trying out the following version of your code:
import json
import Function_and_Data_List

# Parse the payload out of the proxy-integration event body
def lambda_handler(event, context):
    print(event)
    body = json.loads(event['body'])
    num1_in = body['num1_in']
    num2_in = body['num2_in']
    coord = {'num1': num1_in, 'num2': num2_in}
    output = func1(Function_and_Data_List.listdata, coord)
    return {
        "statusCode": 200,
        "body": json.dumps(output)
    }
In the above I also added print(event) so that in the CloudWatch Logs you can inspect the event object which should help debug the issue.
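One caveat: the method created above is a GET, and a GET request typically has no JSON body. With Lambda proxy integration the inputs would then arrive in queryStringParameters rather than event['body'], so a variant along these lines (parameter names assumed to match the test event, func1 and Function_and_Data_List as in the original code) may be closer to what is needed:

import json
import Function_and_Data_List

def lambda_handler(event, context):
    print(event)  # Inspect the proxy event shape in CloudWatch Logs
    # For a GET, proxy integration passes ?num1_in=...&num2_in=... here instead of a body.
    params = event.get('queryStringParameters') or {}
    num1_in = float(params['num1_in'])
    num2_in = float(params['num2_in'])
    coord = {'num1': num1_in, 'num2': num2_in}
    output = func1(Function_and_Data_List.listdata, coord)
    return {
        "statusCode": 200,
        "body": json.dumps(output)
    }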

How do I pass variables through AWS CodePipeline?

AWS CodePipeline orchestrates first lambda-A and then lambda-B, and I want to pass a variable from my lambda-A to my lambda-B.
In lambda-A I set the outputVariables when setting the job to success:
boto3.client("codepipeline").put_job_success_result(
    jobId=event["CodePipeline.job"]["id"],
    outputVariables={"FOO": "BAR"}
)
From the documentation I know that outputVariables are key-value pairs that can be made available to a downstream action.
CodePipeline then triggers lambda-B. How can I retrieve in lambda-B the variables I set in the outputVariables in lambda-A?
In lambda-B's action configuration, under User parameters, enter the variable syntax to ingest the variable created in the earlier action:
#{outputVariables.FOO}
Then you can unpack 'UserParameters' in the Lambda function from the event it receives, which looks something like this (truncated):
{
    "CodePipeline.job": {
        "id": "EXAMPLE-e08a-4f06-b9ba-EXAMPLE",
        "accountId": "EXAMPLE87397",
        "data": {
            "actionConfiguration": {
                "configuration": {
                    "FunctionName": "LambdaForCP-Python",
                    "UserParameters": "5e2591fd79889dEXAMPLE5f33e2"
                }
            },
and your Lambda handler receives it as the event argument:

def lambda_handler(event, context):
    print(event)
This procedure is detailed in Step (f) here:
https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-lambda-variables.html#lambda-variables-pipeline
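Putting the pieces together, here is a minimal sketch of what lambda-B's handler might do. This is an illustration based on the event shape above, not the tutorial's exact code, and the final put_job_success_result call is only there so the pipeline action completes:

import boto3

codepipeline = boto3.client('codepipeline')

def lambda_handler(event, context):
    job = event['CodePipeline.job']
    # UserParameters holds the resolved value of #{outputVariables.FOO} from lambda-A
    foo = job['data']['actionConfiguration']['configuration']['UserParameters']
    print('Value passed from lambda-A:', foo)
    # Report success so the pipeline can move on to any later actions.
    codepipeline.put_job_success_result(jobId=job['id'])
    return foo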

Start/Stop Google Cloud SQL instances using Cloud Functions

I am very new to Google Cloud Platform. I am looking for ways to automate starting and stopping a MySQL Cloud SQL instance at a predefined time.
I found that we could create a Cloud Function to start/stop an instance and then use Cloud Scheduler to trigger it. However, I am not able to understand how this works.
I used the code that I found on GitHub:
https://github.com/chris32g/Google-Cloud-Support/blob/master/Cloud%20Functions/turn_on_cloudSQL_instance
https://github.com/chris32g/Google-Cloud-Support/blob/master/Cloud%20Functions/turn_off_CloudSQL_instance
However, I am not familiar with any of the programming languages like Node, Python, or Go. That was the reason for the confusion. Below is the code that I found on GitHub to turn on a Cloud SQL instance:
# This file uses the Cloud SQL API to turn on a Cloud SQL instance.
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('sqladmin', 'v1beta4', credentials=credentials)
project = 'wave24-gonchristian'  # TODO: Update placeholder value.

def hello_world(request):
    instance = 'test'  # TODO: Update placeholder value.
    request = service.instances().get(project=project, instance=instance)
    response = request.execute()
    j = response["settings"]
    settingsVersion = int(j["settingsVersion"])
    dbinstancebody = {
        "settings": {
            "settingsVersion": settingsVersion,
            "tier": "db-n1-standard-1",
            "activationPolicy": "Always"
        }
    }
    request = service.instances().update(
        project=project,
        instance=instance,
        body=dbinstancebody)
    response = request.execute()
    # pprint(response)
    request_json = request.get_json()
    if request.args and 'message' in request.args:
        return request.args.get('message')
    elif request_json and 'message' in request_json:
        return request_json['message']
    else:
        return f"Hello World!"
________________________
requirements.txt
google-api-python-client==1.7.8
google-auth-httplib2==0.0.3
google-auth==1.6.2
oauth2client==4.1.3
As I mentioned earlier, I am not familiar with Python. I just found this code on GitHub. I was trying to understand what this specific part does:
dbinstancebody = {
    "settings": {
        "settingsVersion": settingsVersion,
        "tier": "db-n1-standard-1",
        "activationPolicy": "Always"
    }
}
The code block above specifies the SQL instance properties you would like to update, among which the most relevant for your case is activationPolicy, which allows you to stop or start the SQL instance.
For Second Generation instances, the activation policy is used only to start or stop the instance. You change the activation policy by starting and stopping the instance. Stopping the instance prevents further instance charges.
The activation policy can have two values, Always or Never: Always will start the instance and Never will stop it.
You can use the API to amend the activationPolicy to "NEVER" to stop the server or "ALWAYS" to start it.
# PATCH
https://sqladmin.googleapis.com/sql/v1beta4/projects/{project}/instances/{instance}

# BODY
{
  "settings": {
    "activationPolicy": "NEVER"
  }
}
See this article in the Cloud SQL docs for more info: Starting, stopping, and restarting instances. You can also try out the instances.patch method in the REST API reference.
Please try the code below:
from pprint import pprint
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import os

credentials = GoogleCredentials.get_application_default()
service = discovery.build("sqladmin", "v1beta4", credentials=credentials)
project_id = os.environ.get("GCP_PROJECT")

# Set up these vars using Terraform and assign the values via Terraform
desired_policy = os.environ.get("DESIRED_POLICY")  # ALWAYS or NEVER
instance_name = os.environ.get("INSTANCE_NAME")

def cloudsql(request):
    request = service.instances().get(project=project_id, instance=instance_name)
    response = request.execute()
    state = response["state"]
    instance_state = str(state)
    x = response["settings"]
    current_policy = str(x["activationPolicy"])
    dbinstancebody = {"settings": {"activationPolicy": desired_policy}}
    if instance_state != "RUNNABLE":
        print("Instance is not in RUNNABLE STATE")
    else:
        if desired_policy != current_policy:
            request = service.instances().patch(
                project=project_id, instance=instance_name, body=dbinstancebody
            )
            response = request.execute()
            pprint(response)
        else:
            print(f"Instance is in RUNNABLE STATE but is also already configured with the desired policy: {desired_policy}")
In my repo you can find more information on how to set up the Cloud Function using Terraform. This Cloud Function is intended to do what you want, but it uses environment variables; if you don't want to use them, just change the variable values in the Python code.
Here is my repository Repo

Two Step Build Process Jenkins

I am creating a Cloudfront Service for my organization. I am trying to create a job where a user can execute a Jenkins Job to update a distribution.
I would like the ability for the user to input a Distribution ID and then have Jenkins Auto-Fill a secondary set of parameters. Jenkins would need to grab the configuration for that Distribution (via Groovy or other means) to do that auto-fill. The user then would select which configuration options they would like to change and hit submit. The job would then make the requested updates (via a python script).
Can this be done through some combination of plugins (or any other means)?
// The first stage requests the DistributionID from a user
stage 'Input Distribution ID'
def distributionId = input(
    id: 'distributionId', message: "Cloudfront Distribution ID", parameters: [
        [$class: 'TextParameterDefinition',
         description: 'Distribution ID', name: 'DistributionID'],
    ])
echo ("using DistributionID=" + distributionId)

// Second stage
// Sample data - you'd need to get the real data from somewhere here
// assume data will be in distributionData after this
def map = [
    "1": [ name: "1", data: "data_1"],
    "2": [ name: "2", data: "data_2"],
    "other": [ name: "other", data: "data_other"]
]
def distributionData
if (distributionId in map.keySet()) {
    distributionData = map[distributionId]
} else {
    distributionData = map["other"]
}

// The third stage uses the gathered data, puts these into default values
// and requests another user input.
// The user now has the choice of altering the values or leaving them as-is.
stage 'Configure Distribution'
def userInput = input(
    id: 'userInput', message: 'Change Config', parameters: [
        [$class: 'TextParameterDefinition', defaultValue: distributionData.name,
         description: 'Name', name: 'name'],
        [$class: 'TextParameterDefinition', defaultValue: distributionData.data,
         description: 'Data', name: 'data']
    ])

// Fourth - Now, here's the actual code to alter the Cloudfront Distribution
echo ("Name=" + userInput['name'])
echo ("Data=" + userInput['data'])
Create a new pipeline and copy/paste this into the pipeline script section.
Play around with it.
I can easily imagine this code could be implemented in a much better way, but at least it's a start.
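For the "actual code to alter the Cloudfront Distribution" placeholder in the fourth stage, here is a rough Python sketch of what the script invoked by Jenkins could do with boto3. The specific field being changed (Comment) is purely illustrative; in practice you would apply whichever values the user entered in the 'Configure Distribution' input step:

import sys
import boto3

def update_distribution_comment(distribution_id, new_comment):
    """Fetch a CloudFront distribution's config, change one field, and push it back."""
    cloudfront = boto3.client('cloudfront')
    # get_distribution_config returns the current config plus the ETag required for updates.
    current = cloudfront.get_distribution_config(Id=distribution_id)
    config = current['DistributionConfig']
    config['Comment'] = new_comment  # Illustrative: apply whichever fields the user chose in Jenkins.
    return cloudfront.update_distribution(
        Id=distribution_id,
        IfMatch=current['ETag'],
        DistributionConfig=config,
    )

if __name__ == '__main__':
    # e.g. called from the Jenkins job: python update_distribution.py <DistributionID> "new comment"
    print(update_distribution_comment(sys.argv[1], sys.argv[2]))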