How do I retrieve AWS Batch job parameters? - aws-batch

How do I retrieve parameters from an AWS Batch job request? Suppose I have a job submitter app that sends a job request with the following code (in C#):
SubmitJobRequest submitJobRequest = new SubmitJobRequest()
{
    JobName = "MyJobName",
    JobQueue = "MyJobQueue",
    JobDefinition = "MyJobDefinition:1",
    Parameters = new Dictionary<string, string>() { { "Foo", "Bar" } },
};
SubmitJobResponse submitJobResponse = AWSBatchClient.SubmitJob(submitJobRequest);
What I want to do now is retrieve what's in the Parameters field of submitJobRequest from the Docker app that gets launched. How do I do that? It's not passed in as program args; I've tested that (the only args I see are those that were statically defined for 'Command' in my job definition). I know that I can set environment variables via container overrides and then retrieve them via Environment.GetEnvironmentVariable (in C#). But I don't know how to get the parameters. Thanks.

Here is an example using YAML CloudFormation (you can use JSON with the same properties). You can declare the parameters using a Ref in the command section. I am using user_name, but you can add more. Note that there may be a limit of around 30KB on the payload.
ContainerProperties:
  Command:
    - "python"
    - "Your command here"
    - "--user_name"
    - "Ref::user_name"
Now you can submit your job to the queue like this. I am using Python and the boto3 client:
import boto3

client = boto3.client('batch')

# Submit the job (jobName, jobQueue, jobDefinition, and user_name are
# assumed to be defined elsewhere)
job1 = client.submit_job(
    jobName=jobName,
    jobQueue=jobQueue,
    jobDefinition=jobDefinition,
    parameters={
        'user_name': user_name
    }
)
To retrieve the parameters inside your container, use this (I am using argparse):
import argparse

parser = argparse.ArgumentParser(description='AWS Driver Batch Job Runner')
parser.add_argument('--user_name', dest='user_name', required=True)
args = parser.parse_args()
print(args.user_name)

Found the answer I was looking for. I just had to add a ref to the parameter in the Command of the job definition. In the question's example, I would've needed to specify Ref::Foo in Command in the job definition, and then "Bar" would've been passed as a program arg to my container app.
To expand on my example: in my specific case, my program uses the CommandLineParser package for parsing parameters. Suppose one of the CommandLine options is called Foo. If I were running the program from a command line, I'd set a value for Foo with something like "--Foo Bar". To do the same for my batch job, in my job definition I would specify "--Foo Ref::Foo" (without quotes) for Command. Then, in the Parameters field of my SubmitJobRequest object, I would set Foo exactly as in my original example, and my batch program would see "Bar" for the Foo CommandLine option (just as if it were run with "--Foo Bar"). Hope that helps.
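For anyone wiring this up programmatically rather than through the console or the C# SDK, here is a minimal boto3 sketch of the same mechanism; the image name and entry point are hypothetical, and the C# SDK exposes the same fields:

import boto3

batch = boto3.client('batch')

# The job definition's command references the parameter by name via Ref::
batch.register_job_definition(
    jobDefinitionName='MyJobDefinition',
    type='container',
    containerProperties={
        'image': 'my-batch-image',  # hypothetical image name
        'resourceRequirements': [
            {'type': 'VCPU', 'value': '1'},
            {'type': 'MEMORY', 'value': '512'},
        ],
        'command': ['dotnet', 'MyApp.dll', '--Foo', 'Ref::Foo'],
    },
)

# At submit time, Batch substitutes "Bar" for Ref::Foo in the command
batch.submit_job(
    jobName='MyJobName',
    jobQueue='MyJobQueue',
    jobDefinition='MyJobDefinition:1',
    parameters={'Foo': 'Bar'},
)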

Related

how to get list of arguments to handler in delayed_job rails

I have a list of all the scheduled jobs which I can get using the command
Delayed::Job.all
Every job has a handler field (a string) containing YAML in which the arguments appear as '-'-prefixed list items. I want to find one of the arguments in this string. One way is obviously to split the string and extract the value, but this method will fail if there is ever any change in the list of arguments passed.
Below given is the handler string of one of my job objects:
"--- !ruby/object:ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper\njob_data:\n job_class: ActionMailer::DeliveryJob\n job_id: 7ce42882-de24-439a-a52a-5681453f4213\n queue_name: mailers\n arguments:\n - EventNotifications\n - reminder_webinar_event_registration\n - deliver_now\n - mail#gmail.com\n - yesha\n - 89\n locale: :en\n"
I want to know if there is any way I can attach extra arguments to the job object while saving it, which can be used later, instead of searching the handler string.
Or, if not, can I get a list of the handler's arguments rather than parsing the string?
Kindly help!
There is a method payload_object for Delayed::Job instances which returns the deserialized handler:
job = Delayed::Job.first
handler = job.payload_object
You can use your handler as needed, such as handler.method
To access the job data:
data = job.payload_object.job_data
To then return the actual job class that was queued, you deserialize the job data:
obj = ActiveJob::Base.deserialize(data)
If your job is a mailer and you want to access the parameters to your mailer, then this is where things get a bit hacky and I'm unsure if there's a better way. The following will return all of the data for the mailer as an array containing the mailer class, method names, and arguments.
mailer_args = obj.instance_variable_get :@serialized_arguments
Finally, you can deserialize all of the mailer arguments with the following, which will contain the same data as mailer_args, but with any ActiveRecord objects (serialized in the form gid://...) deserialized to the actual instances passed to the mailer.
ActiveJob::Arguments.deserialize(mailer_args)

Set or modify an AWS Lambda environment variable with Python boto3

I want to set or modify an environment variable in my Lambda script.
I need to save a value for the next call of my script.
For example, I created an environment variable in the AWS Lambda console without setting a value. After that I tried this:
import boto3
import os

if os.environ['ENV_VAR']:
    print(os.environ['ENV_VAR'])

os.environ['ENV_VAR'] = "new value"
In this case my value never prints.
I also tried os.putenv(), but it gives the same result.
Do you know why this environment variable is not set?
Thank you!
Consider using the boto3 Lambda method update_function_configuration to update the environment variable:
import boto3

client = boto3.client('lambda')

response = client.update_function_configuration(
    FunctionName='test-env-var',
    Environment={
        'Variables': {
            'env_var': 'hello'
        }
    }
)
I need to save a value for the next call of my script.
That's not how environment variables work, nor is it how Lambda works. A child process cannot set environment variables for its parent; a process can only set environment variables in its own environment and those of its child processes.
This may be confusing to you if you set environment variables at the shell, but in that case, the shell is the long running process setting and getting your environment variables, not the programs it calls.
Consider this example:
from os import environ

print(environ['A'])
environ['A'] = "Set from python"
print(environ['A'])
This only sets A within the python process itself. If you run it several times, the initial value of A is always the shell's value, never the value python sets.
$ export A="set from bash"
$ python t.py
set from bash
Set from python
$ python t.py
set from bash
Set from python
Further, even if that weren't the case, it wouldn't work reliably with AWS Lambda. Lambda runs your code on whatever compute resources are available at the time; it typically caches runtimes for frequently executed functions, so data could be written to the filesystem to preserve it across those invocations. But if the next invocation isn't run in that cached runtime, your data is lost.
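For illustration, here is a hypothetical handler showing that caching behavior: module-level state survives warm invocations of the same runtime but vanishes on every cold start.

# invocation_count lives at module level, so it persists only as long
# as this particular runtime instance is kept warm.
invocation_count = 0

def lambda_handler(event, context):
    global invocation_count
    invocation_count += 1
    # Counts 1, 2, 3, ... across warm invocations, then resets to 1
    # whenever Lambda spins up a fresh runtime.
    return {'count': invocation_count}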
For your needs, you want to preserve your data outside the Lambda. Some obvious options are: write to S3, write to DynamoDB, or write to SQS. The next invocation would read from that location, achieving the desired result.
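As a minimal sketch of that approach (the bucket and key names are hypothetical), each invocation could read the value left behind by the previous one from S3 and write a new value back:

import boto3

s3 = boto3.client('s3')
BUCKET = 'my-state-bucket'  # hypothetical bucket
KEY = 'state.txt'           # hypothetical key

def lambda_handler(event, context):
    # Read the value persisted by the previous invocation, if any
    try:
        previous = s3.get_object(Bucket=BUCKET, Key=KEY)['Body'].read().decode()
    except s3.exceptions.NoSuchKey:
        previous = None
    # Persist a new value for the next invocation to pick up
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=b'new value')
    return {'previous': previous}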
AWS Lambda just executes the piece of code with a given set of inputs. Once executed, it returns the output and that's all. If you want to preserve the output for your next call, then you probably need to store it in a DB or a queue, as Dan said. I personally use SQS in conjunction with SNS, which sends me notifications about the current state. You can even store the end result (success or failure) in SQS and use it for the next trigger. Just throwing the options out here; the rest depends on your requirements.

Is it possible to rename an AWS Lambda function?

I have created some AWS Lambda functions for testing purposes (named test_function something), and after testing I found those functions could be used in the prod environment.
Is it possible to rename an AWS Lambda function, and how?
Or should I create a new one and copy-paste the source code?
The closest you can get to renaming the AWS Lambda function is using an alias, which is a way to name a specific version of an AWS Lambda function. The actual name of the function though, is set once you create it. If you want to rename it, just create a new function and copy the exact same code into it. It won't cost you any extra to do this (since you are only charged for execution time) so you lose nothing.
For a reference on how to name versions of the AWS Lambda function, check out the documentation here: Lambda function versions.
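If you want to script the version/alias flow, a minimal boto3 sketch might look like this (the function and alias names are hypothetical):

import boto3

client = boto3.client('lambda')

# Freeze the current code and configuration as an immutable numbered version
version = client.publish_version(FunctionName='my-function')['Version']

# Point a named alias at that version; callers can then invoke 'my-function:prod'
client.create_alias(
    FunctionName='my-function',
    Name='prod',
    FunctionVersion=version,
)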
You cannot rename the function; your only options are to follow the suggestions already provided here or to create a new one and copy-paste the code.
It's actually a good thing that you cannot rename it: if you could, the function would cease to work, because the policies attached to it would still point to the old name, unless you edited every single one of them manually or made them generic (which is ill-advised).
However, as a best practice in software development, I suggest you always keep production and testing (staging) separate, effectively duplicating your environment.
This allows you to test stuff on a safe environment, where if you make a mistake you don't lose anything important, and when you confirm that your new features work, replicate them in production.
So in your case, you would have two lambdas, one called 'my-lambda-staging' and the other 'my-lambda-prod'. Use the ENV variables of lambdas to adapt to the current environment, so you don't need to refactor!
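As a small illustration (the variable and table names are hypothetical), the same handler code can resolve its resources from an environment variable set per function:

import os

# Set ENVIRONMENT to 'staging' or 'prod' in each Lambda's configuration
ENV = os.environ.get('ENVIRONMENT', 'staging')
TABLE_NAME = f'my-table-{ENV}'  # e.g. my-table-staging vs. my-table-prod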
My solution is to export the function, create a new Lambda, then upload the .zip file to the new Lambda.
My solution for a lambda rename: use boto3 to fetch the previous lambda's configuration settings and download the previous lambda's function code, then create a new lambda from them. The triggers won't be copied over, so you need to add them back manually.
from boto3.session import Session
import pprint
import urllib3

pp = pprint.PrettyPrinter(indent=4)

session = Session(aws_access_key_id={YOUR_ACCESS_KEY},
                  aws_secret_access_key={YOUR_SECRET_KEY},
                  region_name='your_region')

PREV_FUNC_NAME = 'your_prev_function_name'
NEW_FUNC_NAME = 'your_new_function_name'

def prev_lambda_code(code_temp_path):
    '''
    Download the previous function's deployment package (a presigned URL
    returned by get_function).
    '''
    http = urllib3.PoolManager()
    response = http.request("GET", code_temp_path)
    if not 200 <= response.status < 300:
        raise Exception(f'Failed to download function code: {response}')
    return response.data

def rename_lambda_function(prev_func_name, new_func_name):
    '''
    Copy the previous lambda function under a new name.
    '''
    lambda_client = session.client('lambda')
    prev_func_info = lambda_client.get_function(FunctionName=prev_func_name)

    # Carry over the VPC configuration, if any
    if 'VpcConfig' in prev_func_info['Configuration']:
        vpc_config = {
            'SubnetIds': prev_func_info['Configuration']['VpcConfig']['SubnetIds'],
            'SecurityGroupIds': prev_func_info['Configuration']['VpcConfig']['SecurityGroupIds']
        }
    else:
        vpc_config = {}

    # Carry over the environment variables, if any
    environment = prev_func_info['Configuration'].get('Environment', {})

    response = lambda_client.create_function(
        FunctionName=new_func_name,
        Runtime=prev_func_info['Configuration']['Runtime'],
        Role=prev_func_info['Configuration']['Role'],
        Handler=prev_func_info['Configuration']['Handler'],
        Code={
            'ZipFile': prev_lambda_code(prev_func_info['Code']['Location'])
        },
        Description=prev_func_info['Configuration']['Description'],
        Timeout=prev_func_info['Configuration']['Timeout'],
        MemorySize=prev_func_info['Configuration']['MemorySize'],
        VpcConfig=vpc_config,
        Environment=environment,
        PackageType=prev_func_info['Configuration']['PackageType'],
        TracingConfig=prev_func_info['Configuration']['TracingConfig'],
        # 'Layers' is absent when the function uses none
        Layers=[layer['Arn'] for layer in prev_func_info['Configuration'].get('Layers', [])],
    )
    pp.pprint(response)

rename_lambda_function(PREV_FUNC_NAME, NEW_FUNC_NAME)

Using Win32_ScheduledJob to create jobs

I'm using the following command to create a job:
wmic job call create "C:\Windows\system32\defrag.exe",0,127,FALSE,TRUE,"********000000.000000-500"
But I keep getting an error:
Invalid format.
Hint: <paramlist> = <param> [, <paramlist>].
I've seen similar syntax online, so I'm a little confused why it isn't working on my system. I elevated the prompt to administrator to test further.
I have noticed the help command for this method seems to be different from the MSDN description.
Help:
Call [ In/Out ]Params&type Status
==== ===================== ======
Create [IN ]Command(STRING) (null)
[IN ]DaysOfMonth(UINT32)
[IN ]DaysOfWeek(UINT32)
[IN ]InteractWithDesktop(BOOLEAN)
[IN ]RunRepeatedly(BOOLEAN)
[IN ]StartTime(DATETIME)
[OUT]JobId(UINT32)
MSDN Link:
https://msdn.microsoft.com/en-us/library/aa389389(v=vs.85).aspx
Trying to avoid the use of PowerShell (Get-WmiObject). Thanks all!
You should specify each property name as well:
wmic job call create Command="C:\Windows\system32\defrag.exe",DaysOfMonth=0,DaysOfWeek=127,InteractWithDesktop=FALSE,RunRepeatedly=TRUE,StartTime="********000000.000000-500"
Executing (Win32_ScheduledJob)->Create()
Method execution successful.
Out Parameters:
instance of __PARAMETERS
{
JobId = 1;
ReturnValue = 0;
};
Also, DaysOfMonth=0 and DaysOfWeek=127 are incorrect values according to MSDN.

How to increase deploy timeout limit at AWS Opsworks?

I would like to increase the deploy timeout in a stack layer that hosts many apps (AWS OpsWorks).
Currently I get the following error:
[2014-05-05T22:27:51+00:00] ERROR: Running exception handlers
[2014-05-05T22:27:51+00:00] ERROR: Exception handlers complete
[2014-05-05T22:27:51+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache/chef-stacktrace.out
[2014-05-05T22:27:51+00:00] ERROR: deploy[/srv/www/lakers_test] (opsworks_delayed_job::deploy line 65) had an error: Mixlib::ShellOut::CommandTimeout: Command timed out after 600s:
Thanks in advance.
First of all, as mentioned in this ticket reporting a similar issue, the OpsWorks guys recommend trying to speed up the call first (there's always room for optimization).
If that doesn't work, we can go down the rabbit hole: the deploy recipe ends up calling OpsWorks' shellout helper, which in turn calls Mixlib::ShellOut.new, which happens to have a timeout option that you can pass in the initializer!
Now you can use an OpsWorks custom cookbook to overwrite the original method and pass the corresponding timeout option. OpsWorks merges the contents of its base cookbooks with the contents of your custom cookbook, so you only need to add and edit one single file in your custom cookbook: opsworks_commons/libraries/shellout.rb:
module OpsWorks
  module ShellOut
    extend self

    # This would be your new default timeout.
    DEFAULT_OPTIONS = { timeout: 900 }

    def shellout(command, options = {})
      cmd = Mixlib::ShellOut.new(command, DEFAULT_OPTIONS.merge(options))
      cmd.run_command
      cmd.error!
      [cmd.stderr, cmd.stdout].join("\n")
    end
  end
end
Notice how the only additions are just DEFAULT_OPTIONS and merging these options in the Mixlib::ShellOut.new call.
An improvement to this method would be changing the timeout option via a Chef attribute, which you could in turn update via your custom JSON in the OpsWorks interface. This means passing the timeout attribute in the initial OpsWorks::ShellOut.shellout call, not in the method definition. But this depends on how the shellout method actually gets called...